Overview

Recap

Meta-ecosystems have mostly been studied assuming patches of equal size, yet natural meta-ecosystems are made of patches that differ in size. To test the effects of patch size on meta-ecosystem properties, we ran a four-week protist experiment in which different ecosystems were connected through a flow of nutrients. The nutrient flow resulted from a perturbation of the ecosystems in which a fixed part of each culture was boiled and then poured into the receiving patch. The perturbation had a fixed volume (e.g., small perturbation = 6.75 ml) that was the same across all patch sizes. The experimental design crossed two disturbance levels with small, medium, and large isolated ecosystems and with small-small, medium-medium, large-large, and small-large meta-ecosystems. We took videos every four days and applied the perturbation and resource flow the day after each video. We skipped the perturbation the day after assembling the experiment, so that perturbations started only once population densities were already high.
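The crossed treatment structure described above can be sketched as follows (the labels here are illustrative, not necessarily those used in the actual culture_info file):

```r
# Sketch of the crossed design: 2 disturbance levels x 7 system types
# (3 isolated ecosystem sizes + 4 two-patch meta-ecosystem combinations).
# Labels are illustrative only.
system_types = c("S", "M", "L",                 # isolated ecosystems
                 "S_S", "M_M", "L_L", "S_L")    # two-patch meta-ecosystems
disturbances = c("low", "high")

design = expand.grid(disturbance = disturbances,
                     system_type = system_types)

nrow(design)  # 14 treatment combinations before replication
```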

We had two main research questions:

  • Do the properties of a patch depend on the size of the patch it is connected to?

  • Do the properties of a meta-ecosystem depend on the relative size of its patches?

Design

Lab work

Creation of high-density monocultures

23/3/22 PPM for increasing the number of monocultures in the collection.

24/3/22 Collection control. See monoculture maintenance lab book p. 47.

26/3/22 Increase the number of monocultures in the collection: take the best culture and make 3 new ones. See monoculture maintenance lab book p. 47.

1/4/22 Make PPM for high density monocultures. See PatchSizePilot lab book p. 5.

3/4/22 Make bacterial solution for high density monocultures. See PatchSizePilot lab book p. 8.

5/4/22 Grow high density monocultures. Make 3 high-density monocultures of 200 ml for each protist species, with 5% bacterial solution, 85% PPM, 10% protists, and 2 seeds. See PatchSizePilot lab book p. 10.

10/4/2022 Check high density monocultures. Cep, Eup, Spi, Spi te were really low.

13/4/2022 Start of the experiment. See PatchSizePilot p. 33.

Things I could have done better

- Autoclave all the material in advance

- Get more high-density monocultures

- Decide in advance the days on which you will check the high-density monocultures, and prepare bacteria for those days ahead of time, so that if some cultures crash there is still time to make new ones.

- Use a single lab book, including for when you create PPM and check the collection.

- Make a large amount of PPM (>10 L), as you will need it for many different things. Maybe also autoclave 1 L Schott bottles so that you don’t have to oxygenate whole 5 L bottles of PPM; a single 10 L bottle of PPM might even have been worthwhile.

- According to Silvana, protists take 4–7 days to grow: the fastest is Tet (ca. 4 days) and the slowest is Spi (ca. 7 days). Once grown, they should stay at carrying capacity for some time, as can be seen in the monoculture collection. I should make sure I am growing them the right way, and perhaps start 10 days in advance so that even the slow species can be regrown if they crash. What should I do if all of them crashed?

Data

Experimental cultures (culture_info)

This table contains information about the 110 cultures of the experiment.

culture_info = read.csv(here("data", "PatchSizePilot_culture_info.csv"), header = TRUE)
datatable(culture_info[,1:10],
          rownames = FALSE,
          options = list(scrollX = TRUE),
          filter = list(position = 'top', 
                        clear = FALSE))

Patch biomass & community density (ds_patches)

  • Every row is a patch at a given time point.
  • Values from different videos of the same patch have already been averaged.
time_points_ds = list()
for (time_point in first_time_point:last_time_point) {
  
  load(here("data", "population", paste0(paste0("t",time_point),".RData")))
  
  time_points_ds[[time_point+1]] = pop_output %>%
    left_join(read.csv(here("data", 
                            "population_species_ID_Ble", 
                            paste0("species_ID_t", time_point, ".csv"))) %>%
                select(file,
                       Ble),
              by = "file") %>%
    left_join(read.csv(here("data", 
                            "population_species_ID", 
                            paste0("species_ID_t", time_point, ".csv"))) %>%
                select(file,
                       Cep:Tet),
              by = "file")
  
  if(time_point == 0) {
    
    time_points_ds[[time_point+1]] = time_points_ds[[time_point+1]][rep(row.names(pop_output),
                                                                        nrow(culture_info)), ] %>%
      arrange(file) %>%
      mutate(culture_ID = rep(1:n_cultures,
                              times = n_videos_taken_t0))
  }
  
  time_points_ds[[time_point+1]] = time_points_ds[[time_point+1]] %>%
    select(file,
           culture_ID,
           bioarea_per_volume,
           indiv_per_volume,
           all_of(protist_species)) %>%
    mutate(time_point = time_point,
           day = sampling_days[time_point + 1])
  
}

ds_patches = time_points_ds %>%
  bind_rows() %>%
  left_join(culture_info,
            by = "culture_ID")
ds_patches = ds_patches %>%
  filter(! culture_ID %in% ecosystems_to_take_off)
### Average videos & get rid of useless columns
ds_patches = ds_patches %>%
  group_by(
    culture_ID,
    patch_size,
    patch_size_volume,
    disturbance,
    metaecosystem_type,
    time_point,
    day,
    metaecosystem,
    system_nr,
    eco_metaeco_type
  ) %>%
  summarise(
    bioarea_per_volume = mean(bioarea_per_volume),
    indiv_per_volume = mean(indiv_per_volume),
    across(all_of(protist_species), mean)
  ) %>%
  ungroup()
#Change all the measures to ml
#1. Bioarea_per_volume in the original data is in µm2 / µl -> convert to µm2 / ml
#2. Indiv_per_volume & all protist densities in the original data are in individuals / µl -> convert to individuals / ml
ds_patches = ds_patches %>%
  mutate(across(c(bioarea_per_volume, indiv_per_volume, all_of(protist_species)),
                ~ .x / 1000))
#Calculate total response variables for the whole patch
ds_patches = ds_patches %>%
  mutate(bioarea_tot = bioarea_per_volume * patch_size_volume,
         indiv_tot = indiv_per_volume * patch_size_volume,
         across(all_of(protist_species),
                ~ .x * patch_size_volume,
                .names = "{.col}_tot"))
#Calculate species dominance (% of total individuals)
ds_patches = ds_patches %>%
  mutate(across(all_of(protist_species),
                ~ (.x / indiv_per_volume) * 100,
                .names = "{.col}_dominance"))
#Calculate alpha diversity (shannon, simpson, inv simpson, evenness)
ds_patches$species_richness = NA
ds_patches$shannon = NA
ds_patches$simpson = NA
ds_patches$inv_simpson = NA
ds_patches$evenness_pielou = NA


for (row in 1:nrow(ds_patches)) {
  
  species_vector = ds_patches %>%
    slice(row) %>%
    select(all_of(protist_species))
  
  ds_patches[row, ]$species_richness = rowSums(species_vector > 0)
  
  ds_patches[row, ]$simpson = diversity(species_vector,
                                        index = "simpson")
  
  ds_patches[row, ]$shannon = diversity(species_vector,
                                        index = "shannon")
  
  ds_patches[row, ]$inv_simpson = diversity(species_vector,
                                            index = "invsimpson")
  
  ds_patches[row, ]$evenness_pielou = diversity(species_vector) / 
                                      log(specnumber(species_vector))
  
}
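As a quick sanity check on the diversity metrics computed above, a perfectly even toy community should give Shannon = log(richness) and Pielou evenness = 1 (sp1–sp3 below are made-up species names):

```r
library(vegan)  # provides diversity() and specnumber(), as used above

# Toy community: three equally abundant species
toy = data.frame(sp1 = 10, sp2 = 10, sp3 = 10)

shannon = diversity(toy, index = "shannon")   # log(3)
evenness = shannon / log(specnumber(toy))     # Pielou's J = 1 for a perfectly even community
```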
#Add baselines (from t1).
baselines <- ds_patches %>%
  filter(time_point == 1) %>%
  group_by(culture_ID) %>%
  summarise(across(all_of(vars), .names = "baseline_{.col}"))
ds_patches = full_join(ds_patches, baselines, by = "culture_ID")
#Rename patches
ds_patches$eco_metaeco_type <- recode(
  ds_patches$eco_metaeco_type,
  "S" = "Small isolated",
  "S (S_S)" = "Small connected to small",
  "S (S_L)" = "Small connected to large",
  "M" = "Medium isolated",
  "M (M_M)" = "Medium connected to medium",
  "L" = "Large isolated",
  "L (L_L)" = "Large connected to large",
  "L (S_L)" = "Large connected to small"
)
#Reorder patches
ds_patches = ds_patches %>%
  mutate(eco_metaeco_type = factor(
    eco_metaeco_type,
    levels = c(
      "Small isolated",
      "Small connected to small",
      "Small connected to large",
      "Medium isolated",
      "Medium connected to medium",
      "Large isolated",
      "Large connected to large",
      "Large connected to small"
    )
  ))
saveRDS(ds_patches, file = here("results", "ds_patches.RData"))
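Note that the table is written with saveRDS despite the .RData file extension, so downstream scripts should reload it with the matching reader, readRDS:

```r
# Reload the processed patch-level table in a downstream script.
# saveRDS/readRDS are paired, even though the file name ends in .RData.
ds_patches = readRDS(here("results", "ds_patches.RData"))
```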

Patch effect sizes (ds_patches_effect_size)

#Calculate the mean & sd of response variables for each treatment at each time point
ds_patches_effect_size <- ds_patches %>%
  group_by(disturbance, 
           eco_metaeco_type, 
           patch_size, 
           time_point, 
           day) %>%
  summarise(across(all_of(vars),
                   list(mean = mean, sd = sd)),
            sample_size = n())

#Initialise columns
for (var in vars) {
  ds_patches_effect_size <- ds_patches_effect_size %>%
    mutate(
      !!paste0(var, "_d") := NA,
      !!paste0(var, "_d_upper") := NA,
      !!paste0(var, "_d_lower") := NA
    )
}

#Define treatments and controls
treatments_and_controls = data.frame(
  treatment = c(
    "Small connected to small",
    "Small connected to large",
    "Medium connected to medium",
    "Large connected to large",
    "Large connected to small"
  ),
  control = c(
    "Small isolated",
    "Small isolated",
    "Medium isolated",
    "Large isolated",
    "Large isolated"
  )
)

n_of_treatments = nrow(treatments_and_controls)
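calculate.hedges_d is defined elsewhere in the project. For reference, here is a minimal sketch of what it is assumed to compute: Hedges' d (a bias-corrected standardised mean difference) with a 95% normal-approximation CI. The actual implementation may differ.

```r
# Assumed sketch of calculate.hedges_d (the actual function is defined elsewhere):
# Hedges' d with small-sample bias correction and 95% CI.
calculate.hedges_d_sketch = function(mean_t, sd_t, n_t, mean_c, sd_c, n_c) {
  
  s_pooled = sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))
  correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample bias correction
  
  d = correction * (mean_t - mean_c) / s_pooled
  se_d = sqrt((n_t + n_c) / (n_t * n_c) + d^2 / (2 * (n_t + n_c)))
  
  list(d = d,
       lower_CI = d - 1.96 * se_d,
       upper_CI = d + 1.96 * se_d)
}
```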
#Calculate the effect size (Hedges' d)
row_n = 0
for (disturbance_input in c("low", "high")) {
  for (treatment_n in 1:n_of_treatments) {
    for (time_point_input in first_time_point:last_time_point) {
      row_n = row_n + 1
      
      eco_metaeco_treatment = treatments_and_controls$treatment[treatment_n]
      
      treatment_row = ds_patches_effect_size %>%
        filter(
          disturbance == disturbance_input,
          eco_metaeco_type == eco_metaeco_treatment,
          time_point == time_point_input
        )
      
      eco_metaeco_control = treatments_and_controls$control[treatment_n]
      
      control_row = ds_patches_effect_size %>%
        filter(
          disturbance == disturbance_input,
          eco_metaeco_type == eco_metaeco_control,
          time_point == time_point_input
        )
      
      ### Effect size (d) with upper & lower CI for every response variable
      
      rows_to_fill = ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
        ds_patches_effect_size$time_point == time_point_input &
        ds_patches_effect_size$disturbance == disturbance_input
      
      for (var in vars) {
        
        hedges_d_var = calculate.hedges_d(
          treatment_row[[paste0(var, "_mean")]],
          treatment_row[[paste0(var, "_sd")]],
          treatment_row$sample_size,
          control_row[[paste0(var, "_mean")]],
          control_row[[paste0(var, "_sd")]],
          control_row$sample_size
        )
        
        ds_patches_effect_size[[paste0(var, "_d")]][rows_to_fill] = hedges_d_var$d
        ds_patches_effect_size[[paste0(var, "_d_upper")]][rows_to_fill] = hedges_d_var$upper_CI
        ds_patches_effect_size[[paste0(var, "_d_lower")]][rows_to_fill] = hedges_d_var$lower_CI
        
      }
      
    }
  }
}
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Col_dominance$upper_CI
    
    ### Eug dominance
    
    hedges_d_Eug_dominance = calculate.hedges_d(
      treatment_row$Eug_dominance_mean,
      treatment_row$Eug_dominance_sd,
      treatment_row$sample_size,
      control_row$Eug_dominance_mean,
      control_row$Eug_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Eug_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eug_dominance$d
    
    ds_patches_effect_size$Eug_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eug_dominance$lower_CI
    
    ds_patches_effect_size$Eug_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eug_dominance$upper_CI
    
    ### Eup dominance
    
    hedges_d_Eup_dominance = calculate.hedges_d(
      treatment_row$Eup_dominance_mean,
      treatment_row$Eup_dominance_sd,
      treatment_row$sample_size,
      control_row$Eup_dominance_mean,
      control_row$Eup_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Eup_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eup_dominance$d
    
    ds_patches_effect_size$Eup_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eup_dominance$lower_CI
    
    ds_patches_effect_size$Eup_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Eup_dominance$upper_CI
    
    ### Lox dominance
    
    hedges_d_Lox_dominance = calculate.hedges_d(
      treatment_row$Lox_dominance_mean,
      treatment_row$Lox_dominance_sd,
      treatment_row$sample_size,
      control_row$Lox_dominance_mean,
      control_row$Lox_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Lox_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Lox_dominance$d
    
    ds_patches_effect_size$Lox_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Lox_dominance$lower_CI
    
    ds_patches_effect_size$Lox_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Lox_dominance$upper_CI
    
    ### Pau dominance
    
    hedges_d_Pau_dominance = calculate.hedges_d(
      treatment_row$Pau_dominance_mean,
      treatment_row$Pau_dominance_sd,
      treatment_row$sample_size,
      control_row$Pau_dominance_mean,
      control_row$Pau_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Pau_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pau_dominance$d
    
    ds_patches_effect_size$Pau_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pau_dominance$lower_CI
    
    ds_patches_effect_size$Pau_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pau_dominance$upper_CI
    
    ### Pca dominance
    
    hedges_d_Pca_dominance = calculate.hedges_d(
      treatment_row$Pca_dominance_mean,
      treatment_row$Pca_dominance_sd,
      treatment_row$sample_size,
      control_row$Pca_dominance_mean,
      control_row$Pca_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Pca_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pca_dominance$d
    
    ds_patches_effect_size$Pca_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pca_dominance$lower_CI
    
    ds_patches_effect_size$Pca_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Pca_dominance$upper_CI

    ### Spi dominance
    
    hedges_d_Spi_dominance = calculate.hedges_d(
      treatment_row$Spi_dominance_mean,
      treatment_row$Spi_dominance_sd,
      treatment_row$sample_size,
      control_row$Spi_dominance_mean,
      control_row$Spi_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Spi_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_dominance$d
    
    ds_patches_effect_size$Spi_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_dominance$lower_CI
    
    ds_patches_effect_size$Spi_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_dominance$upper_CI
    
    ### Spi te dominance
    
    hedges_d_Spi_te_dominance = calculate.hedges_d(
      treatment_row$Spi_te_dominance_mean,
      treatment_row$Spi_te_dominance_sd,
      treatment_row$sample_size,
      control_row$Spi_te_dominance_mean,
      control_row$Spi_te_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Spi_te_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_te_dominance$d
    
    ds_patches_effect_size$Spi_te_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_te_dominance$lower_CI
    
    ds_patches_effect_size$Spi_te_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Spi_te_dominance$upper_CI
    
    ### Tet dominance
    
    hedges_d_Tet_dominance = calculate.hedges_d(
      treatment_row$Tet_dominance_mean,
      treatment_row$Tet_dominance_sd,
      treatment_row$sample_size,
      control_row$Tet_dominance_mean,
      control_row$Tet_dominance_sd,
      control_row$sample_size
    )
    
    ds_patches_effect_size$Tet_dominance_d[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Tet_dominance$d
    
    ds_patches_effect_size$Tet_dominance_d_lower[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Tet_dominance$lower_CI
    
    ds_patches_effect_size$Tet_dominance_d_upper[
      ds_patches_effect_size$eco_metaeco_type == eco_metaeco_treatment &
      ds_patches_effect_size$time_point == time_point_input &
      ds_patches_effect_size$disturbance == disturbance_input] = 
      hedges_d_Tet_dominance$upper_CI
    
    }
  }
}
#Add baselines (from t1).
baselines <- ds_patches_effect_size %>%
  filter(time_point == 1) %>%
  group_by(eco_metaeco_type) %>%
  summarise(across(all_of(paste0(vars, "_d")), 
                   .names = "baseline_{.col}"))

ds_patches_effect_size = full_join(ds_patches_effect_size, baselines)
saveRDS(ds_patches_effect_size, file = here("results", "ds_patches_effect_size.RData"))
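The blocks above all call `calculate.hedges_d()`, which is defined earlier in this script and returns `d`, `lower_CI`, and `upper_CI`. As a reminder of what that function is assumed to compute, here is a minimal sketch of a standard Hedges' d with small-sample correction and a normal-approximation 95% CI; the formulas below are my assumptions, not the actual implementation:

```r
# Sketch only: the real calculate.hedges_d() is defined earlier in this script.
sketch.hedges_d = function(m1, sd1, n1, m2, sd2, n2) {
  # Pooled standard deviation of the two groups
  s_pooled = sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  # Small-sample bias correction (Hedges' correction factor J)
  correction = 1 - 3 / (4 * (n1 + n2) - 9)
  d = correction * (m1 - m2) / s_pooled
  # Approximate standard error of d, then a 95% CI
  se = sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
  list(d = d, lower_CI = d - 1.96 * se, upper_CI = d + 1.96 * se)
}
```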

Meta-ecosystems (ds_metaecosystems)

  • Each row is a meta-ecosystem.

  • It also contains “fake” meta-ecosystems that I created by pairing isolated patches (metaecosystem_type = S_L_from_isolated and metaecosystem_type = M_M_from_isolated).


IDs_iso_low_high = ds_patches %>% 
  filter(eco_metaeco_type %in% c("Small isolated", "Large isolated", "Medium isolated"), 
         disturbance %in% c("low", "high")) %>% 
  distinct(culture_ID)


### Create a data frame with all the experimental meta-ecosystems and all the possible meta-ecosystems from isolated

#Helper to avoid repeating the same filter-and-pull pattern for each patch size and disturbance level
get_isolated_culture_IDs = function(eco_metaeco_type_input, disturbance_level) {
  ds_patches %>%
    filter(eco_metaeco_type == eco_metaeco_type_input,
           disturbance == disturbance_level) %>%
    pull(culture_ID) %>%
    unique()
}

culture_ID_of_isolated_S_low  = get_isolated_culture_IDs("Small isolated",  "low")
culture_ID_of_isolated_L_low  = get_isolated_culture_IDs("Large isolated",  "low")
culture_ID_of_isolated_S_high = get_isolated_culture_IDs("Small isolated",  "high")
culture_ID_of_isolated_L_high = get_isolated_culture_IDs("Large isolated",  "high")
culture_ID_of_isolated_M_low  = get_isolated_culture_IDs("Medium isolated", "low")
culture_ID_of_isolated_M_high = get_isolated_culture_IDs("Medium isolated", "high")


combinations_S_and_L_low = crossing(culture_ID_of_isolated_S_low,
                                    culture_ID_of_isolated_L_low) %>%
                            mutate(disturbance = "low",
                                   metaecosystem_type = "S_L_from_isolated") %>%
                            rename(ID_first_patch = culture_ID_of_isolated_S_low,
                                   ID_second_patch = culture_ID_of_isolated_L_low)

combinations_S_and_L_high = crossing(culture_ID_of_isolated_S_high,
                                    culture_ID_of_isolated_L_high) %>%
                            mutate(disturbance = "high",
                                   metaecosystem_type = "S_L_from_isolated") %>%
                            rename(ID_first_patch = culture_ID_of_isolated_S_high,
                                   ID_second_patch = culture_ID_of_isolated_L_high)

combinations_M_and_M_low = combn(unique(culture_ID_of_isolated_M_low),
                                 m = 2) %>%
                            t() %>%
                            as.data.frame() %>%
                            mutate(disturbance = "low",
                                   metaecosystem_type = "M_M_from_isolated") %>%
                            rename(ID_first_patch = V1,
                                   ID_second_patch = V2)

combinations_M_and_M_high = combn(unique(culture_ID_of_isolated_M_high),
                                  m = 2) %>%
                            t() %>%
                            as.data.frame() %>%
                            mutate(disturbance = "high",
                                   metaecosystem_type = "M_M_from_isolated") %>%
                            rename(ID_first_patch = V1,
                                   ID_second_patch = V2)

ID_combinations_isolated = rbind(
  combinations_S_and_L_low,
  combinations_S_and_L_high,
  combinations_M_and_M_low,
  combinations_M_and_M_high
) 

ID_combinations_isolated = ID_combinations_isolated %>%
  mutate(system_nr = 1001:(1000 + nrow(ID_combinations_isolated)))

ID_combinations_metaecos = read.csv(here("data", "ID_combinations_metaecos.csv"))

ID_combinations = rbind(ID_combinations_isolated,
                         ID_combinations_metaecos)

number_of_combinations = nrow(ID_combinations)
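Note the two pairing strategies above: `tidyr::crossing()` builds every small-large pair from two different pools of culture IDs, while `combn(..., m = 2)` draws the unordered pairs from a single medium pool, which avoids self-pairs and duplicated pairs. A toy illustration (the IDs are made up):

```r
library(tidyr)

# Two pools: all 2 x 2 = 4 combinations, one per row
crossing(small = c("S1", "S2"), large = c("L1", "L2"))

# One pool: the choose(3, 2) = 3 unordered pairs, one per column
combn(c("M1", "M2", "M3"), m = 2)
```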
#Compute meta-ecosystem properties for each time point

ds_metaecosystems_single_rows = NULL
row_n = 0
for (combination in 1:number_of_combinations){
  for (time_point_input in first_time_point:last_time_point) {
    
    row_n = row_n + 1
    
    current_system_nr = ID_combinations[combination,]$system_nr
    current_IDs = ID_combinations[combination,1:2]
    current_disturbance = ID_combinations[combination,]$disturbance
    current_metaeco_type = ID_combinations[combination,]$metaecosystem_type
    current_day = sampling_days[time_point_input+1]
    
    if (current_system_nr %in% metaecosystems_to_take_off)
      next
    
    
    species_density_of_the_two_patches = ds_patches %>%
      filter(culture_ID %in% current_IDs,
             time_point == time_point_input) %>%
      ungroup() %>%
      select(Ble:Tet)
    
    #Alpha diversity: Shannon (mean between the two patches)
    shannon_patch_1 = diversity(species_density_of_the_two_patches[1,], index = "shannon")
    shannon_patch_2 = diversity(species_density_of_the_two_patches[2,], index = "shannon")
    shannon = (shannon_patch_1 + shannon_patch_2) / 2
    
    #Beta diversity: Jaccard

    jaccard_index = vegdist(species_density_of_the_two_patches, 
                            method = "jaccard") %>%
                    as.numeric()
    
    #Beta diversity: Bray Curtis
    
    bray_curtis = vegdist(species_density_of_the_two_patches, 
                          method = "bray") %>%
                    as.numeric()
    
    #Gamma diversity: Meta-ecosystem richness
    
    presence_absence_two_patches = ifelse(test = species_density_of_the_two_patches > 0,
                                          yes = 1,
                                          no = 0)
    
    summed_columns = colSums(presence_absence_two_patches)
    
    presence_absence_metaecosystem = ifelse(test = summed_columns > 0,
                              yes = 1,
                              no = 0)
    
    metaecosystem_richness = sum(presence_absence_metaecosystem)
    
    #Put everything together
    
    ds_metaecosystems_single_rows[[row_n]] = ds_patches %>%
    filter(culture_ID %in% current_IDs,
           time_point == time_point_input) %>%
    summarise(total_metaecosystem_bioarea = sum(bioarea_tot),
              total_metaecosystem_Ble = sum(Ble_tot),
              total_metaecosystem_Cep = sum(Cep_tot),
              total_metaecosystem_Col = sum(Col_tot),
              total_metaecosystem_Eug = sum(Eug_tot),
              total_metaecosystem_Eup = sum(Eup_tot),
              total_metaecosystem_Lox = sum(Lox_tot),
              total_metaecosystem_Pau = sum(Pau_tot),
              total_metaecosystem_Pca = sum(Pca_tot),
              total_metaecosystem_Spi = sum(Spi_tot),
              total_metaecosystem_Spi_te = sum(Spi_te_tot),
              total_metaecosystem_Tet = sum(Tet_tot)) %>% 
    mutate(system_nr = current_system_nr,
           metaecosystem_type = current_metaeco_type,
           disturbance = current_disturbance,
           time_point = time_point_input,
           day = current_day,
           jaccard_index = jaccard_index,
           bray_curtis = bray_curtis,
           metaecosystem_richness = metaecosystem_richness,
           mean_shannon = shannon) %>%
      ungroup()
    
  }
}
## Warning in vegdist(species_density_of_the_two_patches, method = "jaccard"): you have empty rows: their dissimilarities may be
##                  meaningless in method "jaccard"
## Warning in vegdist(species_density_of_the_two_patches, method = "bray"): you have empty rows: their dissimilarities may be
##                  meaningless in method "bray"
## Warning in vegdist(species_density_of_the_two_patches, method = "jaccard"):
## missing values in results
## Warning in vegdist(species_density_of_the_two_patches, method = "bray"): missing
## values in results
ds_metaecosystems = ds_metaecosystems_single_rows %>%
  bind_rows()
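The repeated `vegdist()` warnings come from meta-ecosystems in which a patch contained no individuals at a given time point: the dissimilarity between a non-empty and an all-zero row equals 1 by construction, and when both rows are empty the index is 0/0, which produces the additional "missing values in results" warnings. A minimal reproduction:

```r
library(vegan)

# One empty patch: the dissimilarity is 1, but vegdist() flags it as possibly meaningless
vegdist(rbind(c(0, 0), c(10, 5)), method = "jaccard")

# Both patches empty: 0/0 yields NaN, hence "missing values in results"
vegdist(rbind(c(0, 0), c(0, 0)), method = "bray")
```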
#Add baselines (from t1).
baselines <- ds_metaecosystems %>%
  ungroup() %>%
  filter(time_point == 1) %>%
  group_by(system_nr) %>%
  summarise(across(all_of(vars_metaecos), .names = "baseline_{.col}"))
ds_metaecosystems = full_join(ds_metaecosystems, baselines)
ds_metaecosystems = as.data.frame(ds_metaecosystems)
#Rename the meta-ecosystems
ds_metaecosystems$metaecosystem_type <- recode(
  ds_metaecosystems$metaecosystem_type,
  "S_S" = "Small-Small meta-ecosystem",
  "M_M" = "Medium-Medium meta-ecosystem",
  "M_M_from_isolated" = "Medium-Medium isolated",
  "L_L" = "Large-Large meta-ecosystem",
  "S_L" = "Small-Large meta-ecosystem",
  "S_L_from_isolated" = "Small-Large isolated"
)
#Reorder the meta-ecosystems
ds_metaecosystems = ds_metaecosystems %>%
  mutate(metaecosystem_type = factor(metaecosystem_type,
                                     levels = c(
                                       "Small-Small meta-ecosystem",
                                       "Medium-Medium meta-ecosystem",
                                       "Medium-Medium isolated",
                                       "Large-Large meta-ecosystem",
                                       "Small-Large meta-ecosystem",
                                       "Small-Large isolated"
                                     )))
#Add whether systems are connected or not 
ds_metaecosystems$connection <-
  ifelse(
    ds_metaecosystems$metaecosystem_type %in% c("Medium-Medium isolated",
                                                "Small-Large isolated"),
    yes = "isolated",
    no = "connected"
  )
#Add whether patch sizes are symmetric or not
ds_metaecosystems$patch_size_symmetry <-
  ifelse(
    ds_metaecosystems$metaecosystem_type %in% c("Small-Large isolated",
                                                "Small-Large meta-ecosystem"),
    yes = "asymmetric",
    no = "symmetric"
  )
saveRDS(ds_metaecosystems, file = here("results", "ds_metaecosystems.RData"))
#ds_metaecosystems = readRDS(file = here("results", "ds_metaecosystems.RData"))

Body size (ds_individuals)

load(here("data", "morphology", "t0.RData"));t0 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t1.RData"));t1 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t2.RData"));t2 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t3.RData"));t3 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t4.RData"));t4 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t5.RData"));t5 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t6.RData"));t6 = morph_mvt; rm(morph_mvt)
load(here("data", "morphology", "t7.RData"));t7 = morph_mvt; rm(morph_mvt)
t0$time = NA
t1$time = NA

t0$replicate_video = as.numeric(sub("sample_", "", t0$file)) #"sample_00001" -> 1, ..., "sample_00012" -> 12
t1$replicate_video = 1 #In t1 I took only 1 video/culture
t2$replicate_video = 1 #In t2 I took only 1 video/culture
t3$replicate_video = 1 #In t3 I took only 1 video/culture
t4$replicate_video = 1 #In t4 I took only 1 video/culture
t5$replicate_video = 1 #In t5 I took only 1 video/culture
t6 = t6 %>% rename(replicate_video = replicate)
t7 = t7 %>% rename(replicate_video = replicate)
cultures_n = max(culture_info$culture_ID)
original_t0_rows = nrow(t0)
ID_vector = rep(1:cultures_n, each = original_t0_rows)
t0 = t0 %>%
  slice(rep(1:n(), cultures_n)) %>%
  mutate(culture_ID = ID_vector)
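At t0 the videos come from the common starting cultures, so the same t0 measurements are copied once per culture: `slice(rep(1:n(), cultures_n))` stacks the whole data frame `cultures_n` times, and the `mutate` labels each copy with a `culture_ID`. A toy illustration of the pattern:

```r
library(dplyr)

toy = data.frame(file = c("a", "b"))  #as if 2 individuals were measured at t0
k = 3                                 #as if there were 3 cultures

toy_rep = toy %>%
  slice(rep(1:n(), k)) %>%                        #stack the data frame k times
  mutate(culture_ID = rep(1:k, each = nrow(toy))) #label each copy with an ID
#6 rows: each culture gets its own copy of both t0 individuals
```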

t0 = merge(culture_info, t0, by="culture_ID")
t1 = merge(culture_info, t1, by="culture_ID")
t2 = merge(culture_info, t2, by="culture_ID")
t3 = merge(culture_info, t3, by="culture_ID")
t4 = merge(culture_info, t4, by="culture_ID")
t5 = merge(culture_info, t5, by="culture_ID")
t6 = merge(culture_info, t6, by="culture_ID")
t7 = merge(culture_info, t7, by="culture_ID")
ds_individuals = rbind(t0, t1, t2, t3, t4, t5, t6, t7)
rm(t0, t1, t2, t3, t4, t5, t6, t7)
#Convert time point labels ("t0"..."t7") into days (one time point every 4 days)
ds_individuals$day = 4 * as.numeric(sub("t", "", ds_individuals$time_point))

#Strip the "t" prefix, keeping time_point as character ("0"..."7")
ds_individuals$time_point = sub("t", "", ds_individuals$time_point)

#Select useful columns
ds_individuals = ds_individuals %>% 
  select(culture_ID, 
         patch_size, 
         patch_size_volume,
         disturbance, 
         metaecosystem_type, 
         mean_area, 
         replicate_video, 
         time_point,
         day, 
         metaecosystem, 
         system_nr, 
         eco_metaeco_type)
#Rename the patches

ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "S"] = "Small isolated"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "S (S_S)"] = "Small connected to small"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "S (S_L)"] = "Small connected to large"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "M"] = "Medium isolated"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "M (M_M)"] = "Medium connected to medium"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "L"] = "Large isolated"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "L (L_L)"] = "Large connected to large"
ds_individuals$eco_metaeco_type[ds_individuals$eco_metaeco_type == "L (S_L)"] = "Large connected to small"

#Reorder the patches

ds_individuals = ds_individuals %>%
  mutate(eco_metaeco_type = factor(
    eco_metaeco_type,
    levels = c(
      "Small isolated",
      "Small connected to small",
      "Small connected to large",
      "Medium isolated",
      "Medium connected to medium",
      "Large isolated",
      "Large connected to large",
      "Large connected to small"
    )
  ))
saveRDS(ds_individuals, file = here("results", "ds_individuals.RData"))

I’m only showing the first rows because the whole dataset is too large to display.

datatable(head(ds_individuals),
          rownames = FALSE,
          options = list(scrollX = TRUE),
          filter = list(position = 'top', 
                        clear = FALSE))

Median body size (ds_median_body_size)

eco_metaeco_types = unique(ds_patches$eco_metaeco_type)

ds_median_body_size = ds_individuals %>%
  group_by(
    culture_ID,
    patch_size,
    patch_size_volume,
    disturbance,
    metaecosystem_type,
    time_point,
    day,
    metaecosystem,
    eco_metaeco_type
  ) %>%
  summarise(median_body_size = median(mean_area)) %>%
  ungroup()
saveRDS(ds_median_body_size, file = here("results", "ds_median_body_size.RData"))
datatable(ds_median_body_size,
          rownames = FALSE,
          options = list(scrollX = TRUE),
          filter = list(position = 'top', 
                        clear = FALSE))

Median body size effect size (ds_median_body_size_effect_size)

ds_median_body_size_effect_size = ds_median_body_size %>%
  group_by(disturbance, 
           eco_metaeco_type, 
           patch_size, 
           time_point, 
           day) %>%
  summarise(
    sample_size = n(),
    median_body_size_mean = mean(median_body_size),
    median_body_size_sd = sd(median_body_size)
  ) %>% #Initialise columns
  mutate(median_body_size_d = NA,
         median_body_size_d_upper = NA,
         median_body_size_d_lower = NA,
         median_body_size_lnRR = NA)
treatments_and_controls = data.frame(
  treatment = c("Small connected to small", "Small connected to large", "Medium connected to medium", "Large connected to large", "Large connected to small"),
  control = c("Small isolated", "Small isolated", "Medium isolated", "Large isolated", "Large isolated")
)
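`calculate.hedges_d` is a helper defined elsewhere in this project. For reference, a minimal sketch of a Hedges' d calculation (standardised mean difference with small-sample correction and a normal-approximation 95% CI; the function and argument names here are hypothetical and need not match the project's implementation):

```r
#Sketch of Hedges' d: standardised mean difference with small-sample correction
hedges_d_sketch = function(mean_t, sd_t, n_t, mean_c, sd_c, n_c) {
  #Pooled standard deviation across treatment and control
  sd_pooled = sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))
  #Small-sample correction factor (Hedges 1981)
  correction = 1 - 3 / (4 * (n_t + n_c) - 9)
  d = correction * (mean_t - mean_c) / sd_pooled
  #Approximate sampling variance of d and normal 95% CI
  var_d = (n_t + n_c) / (n_t * n_c) + d^2 / (2 * (n_t + n_c))
  list(d = d,
       lower_CI = d - 1.96 * sqrt(var_d),
       upper_CI = d + 1.96 * sqrt(var_d))
}
```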

n_of_treatments = nrow(treatments_and_controls)

row_n = 0
for (disturbance_input in c("low", "high")) {
  for (treatment_n in 1:n_of_treatments) {
    for (time_point_input in 0:7) {
      
      row_n = row_n + 1
      
      eco_metaeco_treatment = treatments_and_controls$treatment[treatment_n]
      
      treatment_row = ds_median_body_size_effect_size %>%
        filter(
          disturbance == disturbance_input,
          eco_metaeco_type == eco_metaeco_treatment,
          time_point == time_point_input
        )
      
      eco_metaeco_control = treatments_and_controls$control[treatment_n]
      
      control_row = ds_median_body_size_effect_size %>%
        filter(
          disturbance == disturbance_input,
          eco_metaeco_type == eco_metaeco_control,
          time_point == time_point_input
        )
      
      hedges_d_median_body_size = calculate.hedges_d(
        treatment_row$median_body_size_mean,
        treatment_row$median_body_size_sd,
        treatment_row$sample_size,
        control_row$median_body_size_mean,
        control_row$median_body_size_sd,
        control_row$sample_size
      )
      
      ds_median_body_size_effect_size$median_body_size_d[
        ds_median_body_size_effect_size$eco_metaeco_type == eco_metaeco_treatment &
        ds_median_body_size_effect_size$time_point == time_point_input &
        ds_median_body_size_effect_size$disturbance == disturbance_input] = 
        hedges_d_median_body_size$d
      
      ds_median_body_size_effect_size$median_body_size_d_upper[
        ds_median_body_size_effect_size$eco_metaeco_type == eco_metaeco_treatment &
        ds_median_body_size_effect_size$time_point == time_point_input &
        ds_median_body_size_effect_size$disturbance == disturbance_input] = 
        hedges_d_median_body_size$upper_CI
      
      ds_median_body_size_effect_size$median_body_size_d_lower[
        ds_median_body_size_effect_size$eco_metaeco_type == eco_metaeco_treatment &
        ds_median_body_size_effect_size$time_point == time_point_input &
        ds_median_body_size_effect_size$disturbance == disturbance_input] = 
        hedges_d_median_body_size$lower_CI
      
      ds_median_body_size_effect_size$median_body_size_lnRR[
        ds_median_body_size_effect_size$eco_metaeco_type == eco_metaeco_treatment &
        ds_median_body_size_effect_size$time_point == time_point_input &
        ds_median_body_size_effect_size$disturbance == disturbance_input] = 
        log(treatment_row$median_body_size_mean / control_row$median_body_size_mean)
    
    }
  }
}
saveRDS(ds_median_body_size_effect_size, here("results", "ds_median_body_size_effect_size.RData"))
datatable(ds_median_body_size_effect_size,
          rownames = FALSE,
          options = list(scrollX = TRUE),
          filter = list(position = 'top', 
                        clear = FALSE))

Size classes (ds_classes)

  • Each row is the size class of a culture at a certain time point
  • I create 12 size classes, as did Jacquet, Gounand, and Altermatt (2020). The lower boundary of the first class is the smallest body size observed in the experiment, and the upper boundary of the last class is the largest.
nr_of_size_classes = 12

smallest_size = min(ds_individuals$mean_area)
largest_size = max(ds_individuals$mean_area)

The logarithm of the body size of the largest individual in the experiment was 11.3795139 (size in μm²), compared to 11.4 in Jacquet, Gounand, and Altermatt (2020).

size_class_width = (largest_size - smallest_size) / nr_of_size_classes

size_class_boundaries = seq(from = smallest_size,
                            to = largest_size,
                            by = size_class_width)

single_videos_rows = NULL
row = 0
for (class in 1:nr_of_size_classes) {
  bin_lower_limit = size_class_boundaries[class]
  bin_upper_limit = size_class_boundaries[class + 1]
  mean_size = (size_class_boundaries[class] + size_class_boundaries[class + 1]) / 2
  
  
  for (time_point_input in 0:last_time_point) {
    for (culture_ID_input in 1:n_cultures) {
      for (replicate_video_input in 1:nr_videos[time_point_input + 1]) {
        row = row + 1
        
        video_class_abundance = ds_individuals %>%
          filter(
            bin_lower_limit <= mean_area,
            mean_area <= bin_upper_limit,
            time_point == time_point_input,
            culture_ID == culture_ID_input,
            replicate_video == replicate_video_input
          ) %>%
          summarise(size_class_abundance = n()) %>%
          pull(size_class_abundance)
        
        single_videos_rows[[row]] = ds_patches %>%
          filter(
            time_point == time_point_input,
            culture_ID == culture_ID_input
          ) %>%
          mutate(
            replicate_video = replicate_video_input,
            size_class_abundance = video_class_abundance,
            size_class_n = class,
            size_class = mean_size,
            log_size_class = log(size_class),
            log_abundance = log(size_class_abundance + 1)
          )
        
      }
    }
  }
}

#Watch out: ds_classes contains 27468 rows instead of 27720 because culture_ID = 60, which I spilled during the experiment, was already excluded.

ds_classes = single_videos_rows %>%
  bind_rows()

saveRDS(ds_classes, file = here("results", "ds_classes.RData"))
ds_classes = readRDS(here("results", "ds_classes.RData"))
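The triple loop above assigns individuals to classes by comparing `mean_area` against the bin boundaries. The class assignment itself can be sketched with base R's `cut()` (toy sizes and 3 classes for brevity; note that, unlike the loop, this alone would not keep classes with zero abundance):

```r
x = c(1, 5, 9, 9.5)                              #toy body sizes
breaks = seq(from = 1, to = 10, length.out = 4)  #boundaries of 3 equal-width classes
class_n = cut(x, breaks = breaks, labels = FALSE, include.lowest = TRUE)
class_n  # → 1 2 3 3
```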

Size classes effect size (ds_classes_effect_size)

ds_classes_effect_size = ds_classes %>%
  group_by(
    eco_metaeco_type,
    metaecosystem,
    patch_size,
    disturbance,
    day,
    time_point,
    size_class_n,
    log_size_class
  ) %>%
  summarise(
    log_abundance_sd = sd(log_abundance),
    abundance = mean(size_class_abundance),
    abundance_sd = sd(size_class_abundance),
    log_abundance = mean(log_abundance),
    sample_size = n()
  ) %>%
  mutate(
    log_abundance_se = log_abundance_sd / sqrt(sample_size),
    log_abundance_lower_ci = log_abundance - qt(1 - (0.05 / 2), sample_size - 1) * log_abundance_se,
    log_abundance_upper_ci = log_abundance + qt(1 - (0.05 / 2), sample_size - 1) * log_abundance_se
  ) %>%
  mutate(abundance_d = NA,
         abundance_d_upper = NA,
         abundance_d_lower = NA)
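The confidence intervals above use the t distribution with `sample_size - 1` degrees of freedom. A toy check of the formula on made-up summary statistics:

```r
#Toy 95% CI for a mean (mean 10, sd 2, n = 6), same formula as the mutate above
m = 10; s = 2; n = 6
se = s / sqrt(n)
ci = m + c(-1, 1) * qt(1 - (0.05 / 2), df = n - 1) * se
ci  # roughly (7.90, 12.10)
```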

#Expected number of rows: 12 size classes * 8 eco_metaeco_types * 2 disturbance types * 8 time points = 1536
treatments_and_controls = data.frame(
  treatment = c("Small connected to small", 
                "Small connected to large", 
                "Large connected to large", 
                "Large connected to small"),
  control = c("Small isolated", 
              "Small isolated",
              "Large isolated", 
              "Large isolated")
)

n_of_treatments = nrow(treatments_and_controls)

row_n = 0
for (disturbance_input in c("low", "high")) {
  for (treatment_n in 1:n_of_treatments) {
    for (time_point_input in 0:7) {
      for (size_class_input in 1:nr_of_size_classes) {
        row_n = row_n + 1
        
        eco_metaeco_treatment = treatments_and_controls$treatment[treatment_n]
        
        treatment = ds_classes_effect_size %>%
          filter(
            disturbance == disturbance_input,
            eco_metaeco_type == eco_metaeco_treatment,
            time_point == time_point_input,
            size_class_n == size_class_input
          )
        
        eco_metaeco_control = treatments_and_controls$control[treatment_n]
        
        control = ds_classes_effect_size %>%
          filter(
            disturbance == disturbance_input,
            eco_metaeco_type == eco_metaeco_control,
            time_point == time_point_input,
            size_class_n == size_class_input
          )
        
        #Body size class abundance
        
        hedges_d_size_class = calculate.hedges_d(
          treatment$abundance,
          treatment$abundance_sd,
          treatment$sample_size,
          control$abundance,
          control$abundance_sd,
          control$sample_size
        )
        
        ds_classes_effect_size$abundance_d[
          ds_classes_effect_size$disturbance == disturbance_input &
          ds_classes_effect_size$eco_metaeco_type == eco_metaeco_treatment &
          ds_classes_effect_size$time_point == time_point_input &
          ds_classes_effect_size$size_class_n == size_class_input] = 
          hedges_d_size_class$d
        
        ds_classes_effect_size$abundance_d_upper[
          ds_classes_effect_size$disturbance == disturbance_input &
          ds_classes_effect_size$eco_metaeco_type == eco_metaeco_treatment &
          ds_classes_effect_size$time_point == time_point_input &
          ds_classes_effect_size$size_class_n == size_class_input] =
          hedges_d_size_class$upper_CI
        
        ds_classes_effect_size$abundance_d_lower[
          ds_classes_effect_size$disturbance == disturbance_input &
          ds_classes_effect_size$eco_metaeco_type == eco_metaeco_treatment &
          ds_classes_effect_size$time_point == time_point_input &
          ds_classes_effect_size$size_class_n == size_class_input] =
          hedges_d_size_class$lower_CI
        
      }
    }
  }
}
saveRDS(ds_classes_effect_size, file = here("results", "ds_classes_effect_size.RData"))
datatable(ds_classes_effect_size,
          rownames = FALSE,
          options = list(scrollX = TRUE),
          filter = list(position = 'top', 
                        clear = FALSE))

Filter according to disturbance

ds_patches = ds_patches %>%
  filter(disturbance == disturbance_input)

ds_patches_effect_size = ds_patches_effect_size %>%
  filter(disturbance == disturbance_input)

ds_individuals = ds_individuals %>%
  filter(disturbance == disturbance_input)

ds_classes = ds_classes %>%
  filter(disturbance == disturbance_input)

ds_classes_effect_size = ds_classes_effect_size %>%
  filter(disturbance == disturbance_input)

ds_median_body_size = ds_median_body_size %>%
  filter(disturbance == disturbance_input)

ds_median_body_size_effect_size = ds_median_body_size_effect_size %>%
  filter(disturbance == disturbance_input)

ds_metaecosystems = ds_metaecosystems %>%
  filter(disturbance == disturbance_input)

Meta-ecosystems

All

metaecosystem_type_input = c("Small-Small meta-ecosystem",
                             "Medium-Medium meta-ecosystem",
                             "Medium-Medium isolated",
                             "Large-Large meta-ecosystem",
                             "Small-Large meta-ecosystem",
                             "Small-Large isolated")

Biomass

response_variable = "total_metaecosystem_bioarea"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

plot.metaecos.points(metaecosystem_type_input,
                     response_variable)

Alpha

response_variable = "mean_shannon"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

plot.metaecos.points(metaecosystem_type_input,
                     response_variable)

Beta diversity

response_variable = "bray_curtis"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)
## Warning: Removed 2 rows containing non-finite values (`stat_boxplot()`).

plot.metaecos.points(metaecosystem_type_input,
                     response_variable)
## Warning: Removed 1 rows containing non-finite values (`stat_summary()`).
## Warning: Removed 1 rows containing non-finite values (`stat_summary()`).

Gamma diversity

response_variable = "metaecosystem_richness"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

plot.metaecos.points(metaecosystem_type_input,
                     response_variable)

MM vs SL (meta-ecosystems and isolated)

metaecosystem_type_input = c("Medium-Medium meta-ecosystem",
                             "Medium-Medium isolated",
                             "Small-Large meta-ecosystem",
                             "Small-Large isolated")

Biomass

response_variable = "total_metaecosystem_bioarea"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

p_MM_SL_metaecos_biomass = plot.metaecos.points(metaecosystem_type_input,
                                                response_variable)
p_MM_SL_metaecos_biomass

Analysis

  1. Filter the data, keeping time points 2 through 7 (the last time point). We exclude time points 0 and 1 because they precede the first disturbance, which took place right after time point 1.
filtered_data = ds_metaecosystems %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                metaecosystem_type %in% metaecosystem_type_input)
  2. Construct a full model that captures all our hypotheses.

    The first hypothesis proposes that the total bioarea of medium-medium and small-large meta-ecosystems differs. As such, total bioarea will depend on the meta-ecosystem type (total_metaecosystem_bioarea ~ metaecosystem_type).

    The second hypothesis proposes that total bioarea depends on whether the two patches of a meta-ecosystem are connected or isolated
    (total_metaecosystem_bioarea ~ connection).

    The third hypothesis proposes that the flow of non-living material has different effects depending on whether the meta-ecosystem is medium-medium or small-large
    (total_metaecosystem_bioarea ~ metaecosystem_type * connection).
full_model = lmer(
  total_metaecosystem_bioarea ~
    day +
    metaecosystem_type +
    metaecosystem_type : day +
    connection +
    connection : day + 
    metaecosystem_type : connection +
    metaecosystem_type : connection : day + 
    (day | system_nr),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Check the output of the model.

summary(full_model)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: 
## total_metaecosystem_bioarea ~ day + metaecosystem_type + metaecosystem_type:day +  
##     connection + connection:day + metaecosystem_type:connection +  
##     metaecosystem_type:connection:day + (day | system_nr)
##    Data: filtered_data
## Control: lmerControl(optimizer = "Nelder_Mead")
## 
##      AIC      BIC   logLik deviance df.resid 
##   2338.8   2380.5  -1157.4   2314.8      228 
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.07196 -0.78197 -0.01093  0.59937  2.89467 
## 
## Random effects:
##  Groups    Name        Variance Std.Dev. Corr 
##  system_nr (Intercept) 697.777  26.415        
##            day           1.604   1.267   -1.00
##  Residual              833.099  28.863        
## Number of obs: 240, groups:  system_nr, 40
## 
## Fixed effects:
##                                                  Estimate Std. Error t value
## (Intercept)                                       255.425     18.977  13.460
## day                                                -8.587      0.957  -8.972
## metaecosystem_typeMedium-Medium isolated          -61.663     23.242  -2.653
## metaecosystem_typeSmall-Large meta-ecosystem      -54.097     26.838  -2.016
## metaecosystem_typeSmall-Large isolated            -19.296     21.217  -0.909
## day:metaecosystem_typeMedium-Medium isolated        2.417      1.172   2.062
## day:metaecosystem_typeSmall-Large meta-ecosystem    2.165      1.353   1.599
## day:metaecosystem_typeSmall-Large isolated          2.033      1.070   1.900
## 
## Correlation of Fixed Effects:
##             (Intr) day    m_M-Mi m_S-Lm m_S-Li d:_M-i d:_S-m
## day         -0.958                                          
## mtcsys_M-Mi -0.816  0.782                                   
## mtcsy_S-Lm- -0.707  0.678  0.577                            
## mtcsys_S-Li -0.894  0.857  0.730  0.632                     
## dy:mtc_M-Mi  0.782 -0.816 -0.958 -0.553 -0.700              
## dy:mt_S-Lm-  0.678 -0.707 -0.553 -0.958 -0.606  0.577       
## dy:mtc_S-Li  0.857 -0.894 -0.700 -0.606 -0.958  0.730  0.632
## fit warnings:
## fixed-effect model matrix is rank deficient so dropping 8 columns / coefficients
## optimizer (Nelder_Mead) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')

Check that the residuals of the model are okay (model diagnostics)

plot(full_model)

qqnorm(resid(full_model))

  3. Test the full model.

    Construct the null model for all hypotheses.

null_model = lmer(
  total_metaecosystem_bioarea ~
    day +
    (day | system_nr),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: total_metaecosystem_bioarea ~ day + (day | system_nr)
## full_model: total_metaecosystem_bioarea ~ day + metaecosystem_type + metaecosystem_type:day + connection + connection:day + metaecosystem_type:connection + metaecosystem_type:connection:day + (day | system_nr)
##            npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)    
## null_model    6 2378.6 2399.4 -1183.3   2366.6                         
## full_model   12 2338.8 2380.5 -1157.4   2314.8 51.812  6  2.034e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if (p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = < 0.001, ΔAIC = -39.81"
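The AIC bookkeeping above (extract both AICs, subtract, round) is repeated for every hypothesis test in this document; it could be wrapped in a small helper. A sketch (`compare_delta_AIC` is a hypothetical name, not defined elsewhere in this project):

```r
#Hypothetical helper: delta AIC of a full model relative to a null model,
#rounded as in the comparisons above (negative = full model preferred)
compare_delta_AIC = function(full_model, null_model) {
  round(AIC(full_model) - AIC(null_model), digits = 2)
}

#Usage sketch with two nested lm fits on toy data
set.seed(1)
toy = data.frame(x = 1:10)
toy$y = toy$x + rnorm(10)
delta = compare_delta_AIC(lm(y ~ x, toy), lm(y ~ 1, toy))
```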
  4. Test hypothesis 1

    Construct the null model.

null_model = lmer(
  total_metaecosystem_bioarea ~
    day +
    connection +
    connection : day + 
    metaecosystem_type : connection +
    metaecosystem_type : connection : day + 
    (day | system_nr),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## full_model: total_metaecosystem_bioarea ~ day + metaecosystem_type + metaecosystem_type:day + connection + connection:day + metaecosystem_type:connection + metaecosystem_type:connection:day + (day | system_nr)
## null_model: total_metaecosystem_bioarea ~ day + connection + connection:day + metaecosystem_type:connection + metaecosystem_type:connection:day + (day | system_nr)
##            npar    AIC    BIC  logLik deviance Chisq Df Pr(>Chisq)
## full_model   12 2338.8 2380.5 -1157.4   2314.8                    
## null_model   12 2338.8 2380.5 -1157.4   2314.8     0  0
# p_value = anova$`Pr(>Chisq)`[2]
# if (p_value < 0.001){
#   p_value = "< 0.001"
# } else {
#   p_value = round(p_value, digits = 3)
# }
# print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))

The full model and the null model turn out to be identical (same AIC, log-likelihood, and degrees of freedom), so dropping metaecosystem_type and metaecosystem_type : day from the model has no effect on the fit.
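One likely reason: connection is fully determined by metaecosystem_type (each type is either connected or isolated), so the fixed-effect design matrix is rank deficient (the fit warning above reports 8 dropped columns), and removing metaecosystem_type does not change its column space. A toy check with `model.matrix` (hypothetical factor levels):

```r
#'connection' is a function of 'metaecosystem_type': once type is in the model,
#adding connection contributes no new columns to the design matrix, so nested
#models that differ only in such aliased terms fit identically.
type = factor(c("M_M", "M_M_iso", "S_L", "S_L_iso"))
connection = factor(c("connected", "isolated", "connected", "isolated"))
X_type = model.matrix(~ type)
X_both = model.matrix(~ type + connection)
qr(X_type)$rank == qr(X_both)$rank  # TRUE: same column space
```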

Alpha diversity

response_variable = "mean_shannon"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

p_MM_SL_metaecos_alpha = plot.metaecos.points(metaecosystem_type_input,
                                              response_variable)
p_MM_SL_metaecos_alpha

Beta diversity

response_variable = "bray_curtis"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

p_MM_SL_metaecos_beta = plot.metaecos.points(metaecosystem_type_input,
                                             response_variable)
p_MM_SL_metaecos_beta

Gamma diversity

response_variable = "metaecosystem_richness"
plot.metaecos.boxplots(metaecosystem_type_input,
                       response_variable)

p_MM_SL_metaecos_gamma = plot.metaecos.points(metaecosystem_type_input,
                                              response_variable)
p_MM_SL_metaecos_gamma

Patches

Isolated

eco_metaeco_type_input = c("Small isolated",
                           "Medium isolated",
                           "Large isolated")

Biomass

response_variable = "bioarea_per_volume"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

p_isolated_biomass = plot.patches.points(eco_metaeco_type_input,
                       response_variable)

p_isolated_biomass

Analysis

  1. Filter the data, keeping time points 2 through 7 (the last time point). We exclude time points 0 and 1 because they precede the first disturbance, which took place right after time point 1.
filtered_data = ds_patches %>%
  filter(
    disturbance == disturbance_input,
    metaecosystem == "no",
    time_point >= first_time_point_model,
    time_point <= last_time_point_model
  )
  2. Construct a full model that captures all our hypotheses.

    The first hypothesis proposes that the bioarea density of isolated patches depends on their patch size (patch_size_volume).

    The second hypothesis proposes that the bioarea density of isolated patches declines faster over time in smaller patches (patch_size_volume : day).
full_model = lmer(
  bioarea_per_volume ~
    day +
    patch_size_volume +
    patch_size_volume : day +
    baseline_bioarea_per_volume + 
    baseline_bioarea_per_volume : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Check the output of the model.

summary(full_model)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: 
## bioarea_per_volume ~ day + patch_size_volume + patch_size_volume:day +  
##     baseline_bioarea_per_volume + baseline_bioarea_per_volume:day +  
##     (day | culture_ID)
##    Data: filtered_data
## Control: lmerControl(optimizer = "Nelder_Mead")
## 
##      AIC      BIC   logLik deviance df.resid 
##    199.6    223.9    -89.8    179.6       74 
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.07105 -0.65105 -0.05097  0.52388  2.46384 
## 
## Random effects:
##  Groups     Name        Variance  Std.Dev.  Corr
##  culture_ID (Intercept) 0.000e+00 0.000e+00     
##             day         2.252e-16 1.501e-08  NaN
##  Residual               4.967e-01 7.047e-01     
## Number of obs: 84, groups:  culture_ID, 14
## 
## Fixed effects:
##                                  Estimate Std. Error t value
## (Intercept)                      2.303723   0.875336   2.632
## day                             -0.136101   0.045466  -2.993
## patch_size_volume                0.157960   0.021376   7.390
## baseline_bioarea_per_volume     -0.525129   0.283430  -1.853
## day:patch_size_volume           -0.003929   0.001110  -3.538
## day:baseline_bioarea_per_volume  0.028661   0.014722   1.947
## 
## Correlation of Fixed Effects:
##             (Intr) day    ptch__ bsl___ dy:p__
## day         -0.935                            
## ptch_sz_vlm  0.034 -0.032                     
## bsln_br_pr_ -0.837  0.783 -0.533              
## dy:ptch_sz_ -0.032  0.034 -0.935  0.498       
## dy:bsln_b__  0.783 -0.837  0.498 -0.935 -0.533
## optimizer (Nelder_Mead) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')

Check that the residuals of the model are okay (model diagnostics).

plot(full_model)

qqnorm(resid(full_model))

  3. Test the full model.

Construct the null model for all hypotheses.

null_model = lmer(
  bioarea_per_volume ~
    day +
    baseline_bioarea_per_volume + 
    baseline_bioarea_per_volume : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: bioarea_per_volume ~ day + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
## full_model: bioarea_per_volume ~ day + patch_size_volume + patch_size_volume:day + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
##            npar    AIC    BIC   logLik deviance  Chisq Df Pr(>Chisq)    
## null_model    8 232.87 252.31 -108.433   216.87                         
## full_model   10 199.59 223.90  -89.798   179.59 37.271  2  8.067e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if(p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = < 0.001, ΔAIC = -33.27"
  4. Test hypothesis 1.

Construct the null model.

null_model = lmer(
  bioarea_per_volume ~
    day +
    patch_size_volume : day +
    baseline_bioarea_per_volume + 
    baseline_bioarea_per_volume : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: bioarea_per_volume ~ day + patch_size_volume:day + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
## full_model: bioarea_per_volume ~ day + patch_size_volume + patch_size_volume:day + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
##            npar    AIC    BIC   logLik deviance  Chisq Df Pr(>Chisq)    
## null_model    9 221.42 243.29 -101.709   203.42                         
## full_model   10 199.59 223.90  -89.798   179.59 23.822  1  1.057e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if (p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = < 0.001, ΔAIC = -21.82"
  5. Test hypothesis 2.

Construct the null model.

null_model = lmer(
  bioarea_per_volume ~
    day +
    patch_size_volume +
    baseline_bioarea_per_volume + 
    baseline_bioarea_per_volume : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: bioarea_per_volume ~ day + patch_size_volume + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
## full_model: bioarea_per_volume ~ day + patch_size_volume + patch_size_volume:day + baseline_bioarea_per_volume + baseline_bioarea_per_volume:day + (day | culture_ID)
##            npar    AIC   BIC  logLik deviance  Chisq Df Pr(>Chisq)   
## null_model    9 207.53 229.4 -94.763   189.53                        
## full_model   10 199.59 223.9 -89.798   179.59 9.9305  1   0.001626 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if (p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = 0.002, ΔAIC = -7.93"

Shannon index

response_variable = "shannon"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

p_isolated_alpha = plot.patches.points(eco_metaeco_type_input,
                       response_variable)

p_isolated_alpha
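For reference, the Shannon index is H' = -Σ p_i ln(p_i), computed over the relative abundances p_i of the species in a patch. A minimal base-R sketch (the `counts` vector and the `shannon_index` helper are illustrative, not objects from this analysis):

```r
# Shannon index H' = -sum(p_i * log(p_i)) over species proportions.
shannon_index <- function(counts) {
  p <- counts / sum(counts)   # relative abundances
  p <- p[p > 0]               # drop absent species (0 * log(0) would be NaN)
  -sum(p * log(p))
}

shannon_index(c(10, 10, 10, 10))  # four equally abundant species: log(4) ≈ 1.386
```

With S equally abundant species the index reaches its maximum, log(S), which is what Pielou's evenness (used below) normalizes against.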

Analysis

  1. Filter the data. Keep time points 2 through 7 (the last time point). We exclude time points 0 and 1 because they precede the first disturbance, which took place right after time point 1.
filtered_data = ds_patches %>%
  filter(
    disturbance == disturbance_input,
    metaecosystem == "no",
    time_point >= first_time_point_model,
    time_point <= last_time_point_model
  )
  2. Construct a full model that captures all our hypotheses.

    The first hypothesis proposes that the Shannon index of isolated patches depends on their patch size (patch_size_volume).

    The second hypothesis proposes that the Shannon index of isolated patches declines faster in smaller patches (patch_size_volume : day).
full_model = lmer(
  shannon ~
    day +
    patch_size_volume +
    patch_size_volume : day +
    baseline_shannon + 
    baseline_shannon : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)
## Warning: Some predictor variables are on very different scales: consider
## rescaling

Check the output of the model.

summary(full_model)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: 
## shannon ~ day + patch_size_volume + patch_size_volume:day + baseline_shannon +  
##     baseline_shannon:day + (day | culture_ID)
##    Data: filtered_data
## Control: lmerControl(optimizer = "Nelder_Mead")
## 
##      AIC      BIC   logLik deviance df.resid 
##     71.7     96.0    -25.9     51.7       74 
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.36044 -0.67292 -0.08549  0.69148  2.22891 
## 
## Random effects:
##  Groups     Name        Variance  Std.Dev. Corr 
##  culture_ID (Intercept) 0.0457330 0.21385       
##             day         0.0002879 0.01697  -1.00
##  Residual               0.0936980 0.30610       
## Number of obs: 84, groups:  culture_ID, 14
## 
## Fixed effects:
##                         Estimate Std. Error t value
## (Intercept)            2.0593591  0.8937942   2.304
## day                   -0.0940213  0.0541253  -1.737
## patch_size_volume     -0.0046945  0.0105108  -0.447
## baseline_shannon      -0.2554085  0.5647449  -0.452
## day:patch_size_volume  0.0019005  0.0006365   2.986
## day:baseline_shannon   0.0169909  0.0341992   0.497
## 
## Correlation of Fixed Effects:
##             (Intr) day    ptch__ bsln_s dy:p__
## day         -0.939                            
## ptch_sz_vlm  0.255 -0.239                     
## basln_shnnn -0.962  0.903 -0.485              
## dy:ptch_sz_ -0.239  0.255 -0.939  0.455       
## dy:bsln_shn  0.903 -0.962  0.455 -0.939 -0.485
## fit warnings:
## Some predictor variables are on very different scales: consider rescaling
## optimizer (Nelder_Mead) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')

Check that the residuals of the model are well behaved (model diagnostics).

plot(full_model)

qqnorm(resid(full_model))

  3. Test the full model.

Construct the null model for all hypotheses.

null_model = lmer(
  shannon ~
    day +
    baseline_shannon + 
    baseline_shannon : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: shannon ~ day + baseline_shannon + baseline_shannon:day + (day | culture_ID)
## full_model: shannon ~ day + patch_size_volume + patch_size_volume:day + baseline_shannon + baseline_shannon:day + (day | culture_ID)
##            npar    AIC     BIC  logLik deviance  Chisq Df Pr(>Chisq)    
## null_model    8 93.286 112.732 -38.643   77.286                         
## full_model   10 71.739  96.047 -25.870   51.739 25.547  2  2.836e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if(p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = < 0.001, ΔAIC = -21.55"
  4. Test hypothesis 1.

Construct the null model.

null_model = lmer(
  shannon ~
    day +
    patch_size_volume : day +
    baseline_shannon + 
    baseline_shannon : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)
## Warning: Some predictor variables are on very different scales: consider
## rescaling

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: shannon ~ day + patch_size_volume:day + baseline_shannon + baseline_shannon:day + (day | culture_ID)
## full_model: shannon ~ day + patch_size_volume + patch_size_volume:day + baseline_shannon + baseline_shannon:day + (day | culture_ID)
##            npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)
## null_model    9 69.938 91.815 -25.969   51.938                     
## full_model   10 71.739 96.047 -25.869   51.739 0.1986  1     0.6559
p_value = anova$`Pr(>Chisq)`[2]
if (p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = 0.656, ΔAIC = 1.8"
  5. Test hypothesis 2.

Construct the null model.

null_model = lmer(
  shannon ~
    day +
    patch_size_volume +
    baseline_shannon + 
    baseline_shannon : day +
    (day | culture_ID),
  data = filtered_data,
  REML = FALSE,
  control = lmerControl(optimizer = "Nelder_Mead")
)

Get the p value and the delta AIC of the full model compared to the null model.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Data: filtered_data
## Models:
## null_model: shannon ~ day + patch_size_volume + baseline_shannon + baseline_shannon:day + (day | culture_ID)
## full_model: shannon ~ day + patch_size_volume + patch_size_volume:day + baseline_shannon + baseline_shannon:day + (day | culture_ID)
##            npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)   
## null_model    9 76.741 98.619 -29.371   58.741                        
## full_model   10 71.739 96.047 -25.869   51.739 7.0022  1   0.008141 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>Chisq)`[2]
if (p_value < 0.001){
  p_value = "< 0.001"
} else {
  p_value = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value, ", ΔAIC = ", deltaAIC))
## [1] "P value = 0.008, ΔAIC = -5"

Evenness

response_variable = "evenness_pielou"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)
## Warning: Removed 5 rows containing non-finite values (`stat_boxplot()`).

plot.patches.points(eco_metaeco_type_input,
                       response_variable)
## Warning: Removed 3 rows containing non-finite values (`stat_summary()`).
## Warning: Removed 3 rows containing non-finite values (`stat_summary()`).
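Pielou's evenness (evenness_pielou) normalizes the Shannon index by its maximum possible value, J' = H'/ln(S), where S is the number of species present; it is undefined for a single-species patch, which is one way non-finite values such as those in the warnings above can arise. A minimal sketch (the `pielou_evenness` helper is illustrative, not part of the analysis code):

```r
# Pielou's evenness J' = H' / log(S), where S is the number of species
# present; returns NaN for a single-species community (division by log(1) = 0).
pielou_evenness <- function(counts) {
  p <- counts / sum(counts)
  p <- p[p > 0]
  H <- -sum(p * log(p))
  H / log(length(p))
}

pielou_evenness(c(5, 5, 5))  # perfectly even three-species community: 1
```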

Species densities

for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable))
  
}

Species dominance

for (species_input in protist_species){
  
  response_variable = paste0(species_input, "_dominance")
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable))
  
}
## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 6 rows containing non-finite values (`stat_boxplot()`).

Species composition

for(time_point_input in first_time_point:last_time_point) {
  
  print(paste("Time point: ", time_point_input))
  
  print(
    plot.patches.species.composition.stacked(eco_metaeco_type_input,
                                             time_point_input)
  )
  
}
## [1] "Time point:  0"

## [1] "Time point:  1"

## [1] "Time point:  2"

## [1] "Time point:  3"

## [1] "Time point:  4"
## Warning: Removed 4 rows containing missing values (`geom_bar()`).

## [1] "Time point:  5"

## [1] "Time point:  6"
## Warning: Removed 1 rows containing missing values (`geom_bar()`).

## [1] "Time point:  7"

Size class densities

Mean ± 95% CI
for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points(eco_metaeco_type_input,
                            size_class_input))
  
}

Body size distribution

Median body size

plot.patches.median.body.size.boxplots(eco_metaeco_type_input)

plot.patches.median.body.size.points(eco_metaeco_type_input)
## Warning in qt(conf.interval/2 + 0.5, datac$N - 1): NaNs produced
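The "NaNs produced" warning above comes from the qt() call: when a group contains a single observation (N = 1), the t quantile is requested with zero degrees of freedom and sd() is NA. A sketch of the standard mean ± 95% CI computation (the `ci95` helper is illustrative, not one of the plotting functions):

```r
# Mean ± 95% confidence interval based on the t distribution.
# With n = 1, sd(x) is NA and qt(0.975, df = 0) is NaN -- the source of the
# "NaNs produced" warning in the plotting code above.
ci95 <- function(x) {
  n <- length(x)
  half_width <- qt(0.975, df = n - 1) * sd(x) / sqrt(n)
  c(mean = mean(x), lower = mean(x) - half_width, upper = mean(x) + half_width)
}

ci95(c(1.2, 1.5, 1.1, 1.4))
```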

Small

eco_metaeco_type_input = c("Small isolated",
                           "Small connected to small",
                           "Small connected to large")

Biomass

response_variable = "bioarea_per_volume"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Analysis

filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% c("Small connected to small",
                                                        "Small connected to large"))
full_model = lm(bioarea_per_volume_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day +
                  baseline_bioarea_per_volume_d + 
                  baseline_bioarea_per_volume_d : day,
                  data = filtered_data)

par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(bioarea_per_volume_d ~                  
                  baseline_bioarea_per_volume_d + 
                  baseline_bioarea_per_volume_d : day,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  7 34.69118
## null_model  4 44.69364
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day + 
##     baseline_bioarea_per_volume_d + baseline_bioarea_per_volume_d:day
## Model 2: bioarea_per_volume_d ~ baseline_bioarea_per_volume_d + baseline_bioarea_per_volume_d:day
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)   
## 1     18 3.3278                                
## 2     21 6.4823 -3   -3.1545 5.6876 0.006395 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

Shannon index

response_variable = "shannon"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Analysis

filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% c("Small connected to small", "Small connected to large"))
full_model = lm(shannon_d ~                  
                  eco_metaeco_type + 
                  eco_metaeco_type : day,
                  data = filtered_data)

par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(shannon_d ~                  
                  1,
                  data = filtered_data)

anova = anova(full_model, null_model)
AIC = AIC(full_model, null_model)

anova
## Analysis of Variance Table
## 
## Model 1: shannon_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: shannon_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)  
## 1     20 10.411                             
## 2     23 14.361 -3   -3.9503 2.5296 0.0863 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
AIC
##            df      AIC
## full_model  5 58.06377
## null_model  2 59.78408
p_value = anova$`Pr(>F)`[2]

AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC =  AIC_full - AIC_null

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P = R2_full - R2_null

p_value = round(p_value, digits = 2)
R2_full = round(R2_full, digits = 2)
R2_P = round(R2_P, digits = 2)
deltaAIC = round(deltaAIC, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 (patch type) = ", R2_P))
## [1] "ΔAIC = -1.72, p = 0.09, R2 (patch type) = 0.28"

Evenness

response_variable = "evenness_pielou"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)
## Warning: Removed 14 rows containing non-finite values (`stat_boxplot()`).

plot.patches.points(eco_metaeco_type_input,
                       response_variable)
## Warning: Removed 8 rows containing non-finite values (`stat_summary()`).
## Warning: Removed 8 rows containing non-finite values (`stat_summary()`).

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 30 rows containing missing values (`geom_point()`).
## Warning: Removed 30 rows containing missing values (`geom_line()`).

Species densities

Boxplots
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                              response_variable))
  
}

Mean ± 95% CI
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.points(eco_metaeco_type_input,
                              response_variable))
  
}

Effect size
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.points.ES(eco_metaeco_type_input,
                              paste0(response_variable, "_d")))
  
}
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 22 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 36 rows containing missing values (`geom_point()`).
## Warning: Removed 36 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 30 rows containing missing values (`geom_point()`).
## Warning: Removed 28 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Warning: Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Warning: Removed 16 rows containing missing values (`geom_line()`).

## Warning: Removed 16 rows containing missing values (`geom_point()`).
## Removed 16 rows containing missing values (`geom_line()`).

## Warning: Removed 26 rows containing missing values (`geom_point()`).
## Warning: Removed 22 rows containing missing values (`geom_line()`).

## Warning: Removed 40 rows containing missing values (`geom_point()`).
## Warning: Removed 40 rows containing missing values (`geom_line()`).

Analysis
patch_size_input = "S"
Ble
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Ble_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Ble_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8266 -0.4292 -0.0260  0.4510  0.7098 
## 
## Coefficients:
##                                               Estimate Std. Error t value
## (Intercept)                                   0.321742   0.465522   0.691
## eco_metaeco_typeSmall connected to large     -0.452421   0.658347  -0.687
## eco_metaeco_typeSmall connected to small:day -0.003209   0.024180  -0.133
## eco_metaeco_typeSmall connected to large:day  0.022969   0.024180   0.950
##                                              Pr(>|t|)
## (Intercept)                                     0.497
## eco_metaeco_typeSmall connected to large        0.500
## eco_metaeco_typeSmall connected to small:day    0.896
## eco_metaeco_typeSmall connected to large:day    0.353
## 
## Residual standard error: 0.5722 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.04427,    Adjusted R-squared:  -0.09909 
## F-statistic: 0.3088 on 3 and 20 DF,  p-value: 0.8187
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Ble_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 46.93589
## null_model  2 42.02266
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Ble_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Ble_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 6.5481                           
## 2     23 6.8514 -3  -0.30333 0.3088 0.8187
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", R2 (patch type) = ", R2_P))
## [1] "ΔAIC = 4.91, p = 0.819, R2 (patch type) = 0.04"
Cep
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Cep_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Cep_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.68801 -0.33337 -0.05486  0.17948  1.36224 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   0.06132    0.46328   0.132
## eco_metaeco_typeSmall connected to large      0.04713    0.65518   0.072
## eco_metaeco_typeSmall connected to small:day  0.02238    0.02406   0.930
## eco_metaeco_typeSmall connected to large:day  0.02537    0.02406   1.054
##                                              Pr(>|t|)
## (Intercept)                                     0.896
## eco_metaeco_typeSmall connected to large        0.943
## eco_metaeco_typeSmall connected to small:day    0.363
## eco_metaeco_typeSmall connected to large:day    0.304
## 
## Residual standard error: 0.5694 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.0977, Adjusted R-squared:  -0.03764 
## F-statistic: 0.7219 on 3 and 20 DF,  p-value: 0.5507
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Cep_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 46.70441
## null_model  2 43.17189
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Cep_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Cep_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 6.4852                           
## 2     23 7.1875 -3  -0.70224 0.7219 0.5507
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", R2 (patch type) = ", R2_P))
## [1] "ΔAIC = 3.53, p = 0.551, R2 (patch type) = 0.1"
Col
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Col_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Col_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.51594 -0.20269  0.06636  0.23318  0.38322 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.69892    0.28312  -2.469
## eco_metaeco_typeSmall connected to large      0.95201    0.40039   2.378
## eco_metaeco_typeSmall connected to small:day  0.04274    0.01603   2.667
## eco_metaeco_typeSmall connected to large:day  0.00676    0.01603   0.422
##                                              Pr(>|t|)  
## (Intercept)                                    0.0296 *
## eco_metaeco_typeSmall connected to large       0.0349 *
## eco_metaeco_typeSmall connected to small:day   0.0205 *
## eco_metaeco_typeSmall connected to large:day   0.6807  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3393 on 12 degrees of freedom
##   (20 observations deleted due to missingness)
## Multiple R-squared:  0.5043, Adjusted R-squared:  0.3804 
## F-statistic:  4.07 on 3 and 12 DF,  p-value: 0.03294
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Col_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 16.21125
## null_model  2 21.44103
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Col_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Col_d ~ 1
##   Res.Df    RSS Df Sum of Sq    F  Pr(>F)  
## 1     12 1.3812                            
## 2     15 2.7865 -3   -1.4053 4.07 0.03294 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", R2 (patch type) = ", R2_P))
## [1] "ΔAIC = -5.23, p = 0.033, R2 (patch type) = 0.5"
Eug
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Eug_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Eug_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova

p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)
print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", R2 (patch type) = ", R2_P))
## [1] "ΔAIC = -5.23, p = 0.033, R2 (patch type) = 0.5"
Eup
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Eup_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Eup_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.03376 -0.66471 -0.07713  0.73973  1.08670 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.17992    0.70419  -0.255
## eco_metaeco_typeSmall connected to large      1.36371    0.99588   1.369
## eco_metaeco_typeSmall connected to small:day -0.00592    0.03687  -0.161
## eco_metaeco_typeSmall connected to large:day -0.04992    0.03687  -1.354
##                                              Pr(>|t|)
## (Intercept)                                     0.802
## eco_metaeco_typeSmall connected to large        0.190
## eco_metaeco_typeSmall connected to small:day    0.874
## eco_metaeco_typeSmall connected to large:day    0.195
## 
## Residual standard error: 0.865 on 16 degrees of freedom
##   (16 observations deleted due to missingness)
## Multiple R-squared:  0.2071, Adjusted R-squared:  0.05848 
## F-statistic: 1.393 on 3 and 16 DF,  p-value: 0.281
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Eup_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 56.49234
## null_model  2 55.13451
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Eup_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Eup_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     16 11.971                           
## 2     19 15.098 -3   -3.1275 1.3934  0.281
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = 1.36, p = 0.281, Adjusted R2 = 0.21"
Lox
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Lox_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Lox_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##         13         14         15         16         19         20         25 
## -7.023e-02 -7.023e-02  1.053e-01  1.053e-01 -3.511e-02 -3.511e-02  3.469e-18 
##         26 
##  3.469e-18 
## 
## Coefficients: (1 not defined because of singularities)
##                                              Estimate Std. Error t value
## (Intercept)                                   0.02582    0.09680   0.267
## eco_metaeco_typeSmall connected to large      0.20965    0.11324   1.851
## eco_metaeco_typeSmall connected to small:day  0.01756    0.00680   2.582
## eco_metaeco_typeSmall connected to large:day       NA         NA      NA
##                                              Pr(>|t|)  
## (Intercept)                                    0.8003  
## eco_metaeco_typeSmall connected to large       0.1233  
## eco_metaeco_typeSmall connected to small:day   0.0493 *
## eco_metaeco_typeSmall connected to large:day       NA  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.08309 on 5 degrees of freedom
##   (28 observations deleted due to missingness)
## Multiple R-squared:  0.5761, Adjusted R-squared:  0.4066 
## F-statistic: 3.398 on 2 and 5 DF,  p-value: 0.117
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Lox_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df        AIC
## full_model  4 -12.861757
## null_model  2  -9.994919
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Lox_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Lox_d ~ 1
##   Res.Df      RSS Df Sum of Sq      F Pr(>F)
## 1      5 0.034522                           
## 2      7 0.081447 -2 -0.046925 3.3982  0.117
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -2.87, p = 0.117, Adjusted R2 = 0.58"
Pau
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Pau_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Pau_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.45184 -0.13170  0.02808  0.06780  0.76843 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   1.13357    0.40949   2.768
## eco_metaeco_typeSmall connected to large      0.60956    0.49990   1.219
## eco_metaeco_typeSmall connected to small:day -0.01652    0.02786  -0.593
## eco_metaeco_typeSmall connected to large:day -0.02696    0.01489  -1.811
##                                              Pr(>|t|)  
## (Intercept)                                    0.0137 *
## eco_metaeco_typeSmall connected to large       0.2404  
## eco_metaeco_typeSmall connected to small:day   0.5615  
## eco_metaeco_typeSmall connected to large:day   0.0890 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3524 on 16 degrees of freedom
##   (16 observations deleted due to missingness)
## Multiple R-squared:  0.3473, Adjusted R-squared:  0.2249 
## F-statistic: 2.838 on 3 and 16 DF,  p-value: 0.07107
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Pau_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 20.57888
## null_model  2 23.11092
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Pau_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Pau_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
## 1     16 1.9873                              
## 2     19 3.0447 -3   -1.0574 2.8376 0.07107 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -2.53, p = 0.071, Adjusted R2 = 0.35"
Pca
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Pca_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Pca_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.5505 -0.4200  0.1251  0.3616  0.5112 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.41560    0.34717  -1.197
## eco_metaeco_typeSmall connected to large      0.08442    0.52573   0.161
## eco_metaeco_typeSmall connected to small:day  0.03714    0.01912   1.942
## eco_metaeco_typeSmall connected to large:day  0.06766    0.02326   2.908
##                                              Pr(>|t|)  
## (Intercept)                                    0.2487  
## eco_metaeco_typeSmall connected to large       0.8744  
## eco_metaeco_typeSmall connected to small:day   0.0699 .
## eco_metaeco_typeSmall connected to large:day   0.0103 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4162 on 16 degrees of freedom
##   (16 observations deleted due to missingness)
## Multiple R-squared:  0.5646, Adjusted R-squared:  0.4829 
## F-statistic: 6.915 on 3 and 16 DF,  p-value: 0.003376
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Pca_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 27.22639
## null_model  2 37.85425
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Pca_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Pca_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)   
## 1     16 2.7709                                
## 2     19 6.3634 -3   -3.5925 6.9148 0.003376 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -10.63, p = 0.003, Adjusted R2 = 0.56"
Spi
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Spi_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Spi_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7432 -0.2251  0.1012  0.2672  0.3685 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -1.23248    0.33456  -3.684
## eco_metaeco_typeSmall connected to large      1.62592    0.44073   3.689
## eco_metaeco_typeSmall connected to small:day  0.09592    0.01971   4.866
## eco_metaeco_typeSmall connected to large:day  0.01342    0.01490   0.901
##                                              Pr(>|t|)    
## (Intercept)                                  0.001698 ** 
## eco_metaeco_typeSmall connected to large     0.001679 ** 
## eco_metaeco_typeSmall connected to small:day 0.000124 ***
## eco_metaeco_typeSmall connected to large:day 0.379649    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3527 on 18 degrees of freedom
##   (14 observations deleted due to missingness)
## Multiple R-squared:  0.6198, Adjusted R-squared:  0.5564 
## F-statistic: 9.781 on 3 and 18 DF,  p-value: 0.0004743
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Spi_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 22.15872
## null_model  2 37.43375
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Spi_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Spi_d ~ 1
##   Res.Df    RSS Df Sum of Sq     F    Pr(>F)    
## 1     18 2.2386                                 
## 2     21 5.8878 -3   -3.6492 9.781 0.0004743 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -15.28, p = 0, Adjusted R2 = 0.62"
Spi_te
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Spi_te_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Spi_te_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.18783 -0.07361  0.01015  0.09391  0.14721 
## 
## Coefficients:
##                                               Estimate Std. Error t value
## (Intercept)                                   0.265748   0.157849   1.684
## eco_metaeco_typeSmall connected to large      1.293183   0.276638   4.675
## eco_metaeco_typeSmall connected to small:day  0.006238   0.009134   0.683
## eco_metaeco_typeSmall connected to large:day -0.070436   0.018267  -3.856
##                                              Pr(>|t|)   
## (Intercept)                                   0.13076   
## eco_metaeco_typeSmall connected to large      0.00159 **
## eco_metaeco_typeSmall connected to small:day  0.51395   
## eco_metaeco_typeSmall connected to large:day  0.00484 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1461 on 8 degrees of freedom
##   (24 observations deleted due to missingness)
## Multiple R-squared:  0.8018, Adjusted R-squared:  0.7275 
## F-statistic: 10.79 on 3 and 8 DF,  p-value: 0.003484
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Spi_te_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df       AIC
## full_model  5 -6.967682
## null_model  2  6.453161
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Spi_te_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Spi_te_d ~ 1
##   Res.Df     RSS Df Sum of Sq      F   Pr(>F)   
## 1      8 0.17085                                
## 2     11 0.86197 -3  -0.69111 10.787 0.003484 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -13.42, p = 0.003, Adjusted R2 = 0.8"
Tet
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Tet_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Tet_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova

p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)
print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -13.42, p = 0.003, Adjusted R2 = 0.8"

Species dominance

for (species_input in protist_species){
  
  response_variable = paste0(species_input, "_dominance")
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable))
  
}
## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

## Warning: Removed 12 rows containing non-finite values (`stat_boxplot()`).

Species composition

for(time_point_input in first_time_point:last_time_point) {
  
  print(paste("Time point: ", time_point_input))
  
  print(
    plot.patches.species.composition.stacked(eco_metaeco_type_input,
                                             time_point_input)
  )
  
}
## [1] "Time point:  0"

## [1] "Time point:  1"

## [1] "Time point:  2"

## [1] "Time point:  3"

## [1] "Time point:  4"
## Warning: Removed 4 rows containing missing values (`geom_bar()`).

## [1] "Time point:  5"

## [1] "Time point:  6"
## Warning: Removed 1 rows containing missing values (`geom_bar()`).

## [1] "Time point:  7"

Size class densities

Means & 95% CI
for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points(eco_metaeco_type_input,
                            size_class_input))
  
}

Effect sizes
for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points.ES(eco_metaeco_type_input,
                            size_class_input))
  
}
## Warning: Removed 6 rows containing missing values (`geom_point()`).
## Warning: Removed 6 rows containing missing values (`geom_line()`).

## Warning: Removed 9 rows containing missing values (`geom_point()`).
## Warning: Removed 9 rows containing missing values (`geom_line()`).

## Warning: Removed 17 rows containing missing values (`geom_point()`).
## Warning: Removed 17 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Warning: Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

## Warning: Removed 18 rows containing missing values (`geom_point()`).
## Removed 18 rows containing missing values (`geom_line()`).

Body size distribution

Median body size

plot.patches.median.body.size.boxplots(eco_metaeco_type_input)

plot.patches.median.body.size.points(eco_metaeco_type_input)
## Warning in qt(conf.interval/2 + 0.5, datac$N - 1): NaNs produced

plot.patches.median.body.size.points.ES(eco_metaeco_type_input)
## Warning: Removed 9 rows containing missing values (`geom_point()`).
## Warning: Removed 9 rows containing missing values (`geom_line()`).

Medium

eco_metaeco_type_input = c("Medium isolated",
                           "Medium connected to medium")

Biomass

response_variable = "bioarea_per_volume"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Large

eco_metaeco_type_input = c("Large isolated",
                           "Large connected to large",
                           "Large connected to small")

Biomass

response_variable = "bioarea_per_volume"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Analysis

filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% c("Large connected to large", "Large connected to small"))
full_model = lm(bioarea_per_volume_d ~                  
                  eco_metaeco_type + 
                  eco_metaeco_type : day +
                  baseline_bioarea_per_volume_d + 
                  baseline_bioarea_per_volume_d : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day + 
##     baseline_bioarea_per_volume_d + baseline_bioarea_per_volume_d:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.21472  0.02444  0.11600  0.31913  0.57026 
## 
## Coefficients:
##                                                Estimate Std. Error t value
## (Intercept)                                  -1.863e-01  5.909e-01  -0.315
## eco_metaeco_typeLarge connected to small     -6.917e-01  8.326e-01  -0.831
## baseline_bioarea_per_volume_d                -6.416e-16  8.518e-01   0.000
## eco_metaeco_typeLarge connected to large:day -9.806e-03  3.069e-02  -0.320
## eco_metaeco_typeLarge connected to small:day -1.322e-02  2.742e-02  -0.482
## day:baseline_bioarea_per_volume_d             3.100e-17  4.424e-02   0.000
##                                              Pr(>|t|)
## (Intercept)                                     0.756
## eco_metaeco_typeLarge connected to small        0.417
## baseline_bioarea_per_volume_d                   1.000
## eco_metaeco_typeLarge connected to large:day    0.753
## eco_metaeco_typeLarge connected to small:day    0.635
## day:baseline_bioarea_per_volume_d               1.000
## 
## Residual standard error: 0.6337 on 18 degrees of freedom
## Multiple R-squared:  0.3297, Adjusted R-squared:  0.1435 
## F-statistic: 1.771 on 5 and 18 DF,  p-value: 0.1699
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(bioarea_per_volume_d ~                  
                  baseline_bioarea_per_volume_d + 
                  baseline_bioarea_per_volume_d : day,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  7 53.31024
## null_model  4 55.06986
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)


anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day + 
##     baseline_bioarea_per_volume_d + baseline_bioarea_per_volume_d:day
## Model 2: bioarea_per_volume_d ~ baseline_bioarea_per_volume_d + baseline_bioarea_per_volume_d:day
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     18 7.2290                           
## 2     21 9.9884 -3   -2.7593 2.2902 0.1129
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC , ", p = ", p_value, ", Adjusted R2 = ", R2_P))
## [1] "ΔAIC = -1.76, p = 0.113, Adjusted R2 = 0.26"

Shannon index

response_variable = "shannon"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Analysis

filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% c("Large connected to large", "Large connected to small"))
full_model = lm(shannon_d ~                  
                  eco_metaeco_type + 
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = shannon_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.3351 -0.2152 -0.1075  0.1641  0.5627 
## 
## Coefficients:
##                                               Estimate Std. Error t value
## (Intercept)                                  -0.313268   0.273149  -1.147
## eco_metaeco_typeLarge connected to small     -0.577537   0.386292  -1.495
## eco_metaeco_typeLarge connected to large:day  0.005529   0.014188   0.390
## eco_metaeco_typeLarge connected to small:day  0.033125   0.014188   2.335
##                                              Pr(>|t|)  
## (Intercept)                                    0.2650  
## eco_metaeco_typeLarge connected to small       0.1505  
## eco_metaeco_typeLarge connected to large:day   0.7009  
## eco_metaeco_typeLarge connected to small:day   0.0301 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3357 on 20 degrees of freedom
## Multiple R-squared:  0.2293, Adjusted R-squared:  0.1137 
## F-statistic: 1.984 on 3 and 20 DF,  p-value: 0.1489
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(shannon_d ~                  
                  1,
                  data = filtered_data)

anova = anova(full_model, null_model)
AIC = AIC(full_model, null_model)

anova
## Analysis of Variance Table
## 
## Model 1: shannon_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: shannon_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 2.2544                           
## 2     23 2.9252 -3  -0.67076 1.9836 0.1489
AIC
##            df      AIC
## full_model  5 21.34520
## null_model  2 21.59635
p_value = anova$`Pr(>F)`[2]

AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC =  AIC_full - AIC_null

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P = R2_full - R2_null

p_value = round(p_value, digits = 2)
R2_full = round(R2_full, digits = 2)
R2_P = round(R2_P, digits = 2)
deltaAIC = round(deltaAIC, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 (pach type) = ", R2_P))
## [1] "ΔAIC = -0.25, p = 0.15, R2 (pach type) = 0.23"

Evenness

response_variable = "evenness_pielou"
plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable)

plot.patches.points(eco_metaeco_type_input,
                       response_variable)

plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

Species densities

Boxplots
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                              response_variable))
  
}

Mean ± 95% CI
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.points(eco_metaeco_type_input,
                              response_variable))
  
}

Effect size
for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.points.ES(eco_metaeco_type_input,
                              paste0(response_variable, "_d")))
  
}
## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Warning: Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 16 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 14 rows containing missing values (`geom_point()`).
## Removed 14 rows containing missing values (`geom_line()`).

## Warning: Removed 34 rows containing missing values (`geom_point()`).
## Warning: Removed 32 rows containing missing values (`geom_line()`).

Analysis
patch_size_input = "L"
Ble
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Ble_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Ble_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.27146 -0.16184  0.02059  0.35745  1.39159 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.54430    0.57857  -0.941
## eco_metaeco_typeLarge connected to small      1.34444    0.81822   1.643
## eco_metaeco_typeLarge connected to large:day  0.02572    0.03005   0.856
## eco_metaeco_typeLarge connected to small:day -0.06588    0.03005  -2.192
##                                              Pr(>|t|)  
## (Intercept)                                    0.3581  
## eco_metaeco_typeLarge connected to small       0.1160  
## eco_metaeco_typeLarge connected to large:day   0.4023  
## eco_metaeco_typeLarge connected to small:day   0.0404 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.7111 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.2491, Adjusted R-squared:  0.1365 
## F-statistic: 2.212 on 3 and 20 DF,  p-value: 0.1182
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Ble_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 57.37134
## null_model  2 58.24810
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Ble_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Ble_d ~ 1
##   Res.Df    RSS Df Sum of Sq     F Pr(>F)
## 1     20 10.115                          
## 2     23 13.471 -3    -3.356 2.212 0.1182
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = -0.88, p = 0.118, R2 = 0.25"
Cep
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Cep_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Cep_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.6509 -0.2137  0.3053  0.4745  0.8031 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   0.25114    0.63033   0.398
## eco_metaeco_typeLarge connected to small     -0.17813    0.89142  -0.200
## eco_metaeco_typeLarge connected to large:day -0.03246    0.03274  -0.991
## eco_metaeco_typeLarge connected to small:day -0.03818    0.03274  -1.166
##                                              Pr(>|t|)
## (Intercept)                                     0.695
## eco_metaeco_typeLarge connected to small        0.844
## eco_metaeco_typeLarge connected to large:day    0.333
## eco_metaeco_typeLarge connected to small:day    0.257
## 
## Residual standard error: 0.7748 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.1354, Adjusted R-squared:  0.005715 
## F-statistic: 1.044 on 3 and 20 DF,  p-value: 0.3947
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Cep_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 61.48384
## null_model  2 58.97567
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Cep_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Cep_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 12.005                           
## 2     23 13.885 -3   -1.8801 1.0441 0.3947
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = 2.51, p = 0.395, R2 = 0.14"
Col
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Col_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Col_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.67080 -0.22063 -0.07621  0.22060  0.88921 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.54464    0.38431  -1.417
## eco_metaeco_typeLarge connected to small     -0.08271    0.54350  -0.152
## eco_metaeco_typeLarge connected to large:day  0.01297    0.01996   0.650
## eco_metaeco_typeLarge connected to small:day  0.01222    0.01996   0.612
##                                              Pr(>|t|)
## (Intercept)                                     0.172
## eco_metaeco_typeLarge connected to small        0.881
## eco_metaeco_typeLarge connected to large:day    0.523
## eco_metaeco_typeLarge connected to small:day    0.547
## 
## Residual standard error: 0.4724 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.04968,    Adjusted R-squared:  -0.09287 
## F-statistic: 0.3485 on 3 and 20 DF,  p-value: 0.7906
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Col_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 37.73425
## null_model  2 32.95726
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Col_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Col_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 4.4628                           
## 2     23 4.6961 -3  -0.23331 0.3485 0.7906
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = 4.78, p = 0.791, R2 = 0.05"
Eug
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Eug_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Eug_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova

p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)
print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
Eup
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Eup_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Eup_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7116 -0.3384  0.1020  0.2984  0.4941 
## 
## Coefficients:
##                                               Estimate Std. Error t value
## (Intercept)                                  -0.403150   0.352408  -1.144
## eco_metaeco_typeLarge connected to small     -0.576443   0.498381  -1.157
## eco_metaeco_typeLarge connected to large:day -0.005528   0.018304  -0.302
## eco_metaeco_typeLarge connected to small:day  0.031231   0.018304   1.706
##                                              Pr(>|t|)
## (Intercept)                                     0.266
## eco_metaeco_typeLarge connected to small        0.261
## eco_metaeco_typeLarge connected to large:day    0.766
## eco_metaeco_typeLarge connected to small:day    0.103
## 
## Residual standard error: 0.4332 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.1392, Adjusted R-squared:  0.01009 
## F-statistic: 1.078 on 3 and 20 DF,  p-value: 0.3809
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Eup_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 33.57423
## null_model  2 31.17196
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Eup_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Eup_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 3.7526                           
## 2     23 4.3594 -3  -0.60688 1.0782 0.3809
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = 2.4, p = 0.381, R2 = 0.14"
Lox
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Lox_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Lox_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.97084 -0.17132  0.01584  0.29173  0.84917 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -1.50635    0.43450  -3.467
## eco_metaeco_typeLarge connected to small     -0.01324    0.61448  -0.022
## eco_metaeco_typeLarge connected to large:day  0.06749    0.02257   2.991
## eco_metaeco_typeLarge connected to small:day  0.08369    0.02257   3.708
##                                              Pr(>|t|)   
## (Intercept)                                   0.00243 **
## eco_metaeco_typeLarge connected to small      0.98302   
## eco_metaeco_typeLarge connected to large:day  0.00723 **
## eco_metaeco_typeLarge connected to small:day  0.00139 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5341 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.5488, Adjusted R-squared:  0.4811 
## F-statistic: 8.109 on 3 and 20 DF,  p-value: 0.0009932
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Lox_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 43.62576
## null_model  2 56.72613
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Lox_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Lox_d ~ 1
##   Res.Df     RSS Df Sum of Sq      F    Pr(>F)    
## 1     20  5.7045                                  
## 2     23 12.6429 -3   -6.9385 8.1088 0.0009932 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = -13.1, p = 0.001, R2 = 0.55"
Pau
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Pau_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Pau_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.61476 -0.45015  0.06934  0.48889  1.21317 
## 
## Coefficients:
##                                               Estimate Std. Error t value
## (Intercept)                                  -0.837932   0.680626  -1.231
## eco_metaeco_typeLarge connected to small     -2.930440   0.962550  -3.044
## eco_metaeco_typeLarge connected to large:day  0.008356   0.035352   0.236
## eco_metaeco_typeLarge connected to small:day  0.144717   0.035352   4.094
##                                              Pr(>|t|)    
## (Intercept)                                  0.232560    
## eco_metaeco_typeLarge connected to small     0.006401 ** 
## eco_metaeco_typeLarge connected to large:day 0.815544    
## eco_metaeco_typeLarge connected to small:day 0.000565 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.8366 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.4839, Adjusted R-squared:  0.4065 
## F-statistic: 6.252 on 3 and 20 DF,  p-value: 0.003609
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Pau_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df     AIC
## full_model  5 65.1689
## null_model  2 75.0457
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Pau_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Pau_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)   
## 1     20 13.998                                
## 2     23 27.124 -3   -13.126 6.2517 0.003609 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = -9.88, p = 0.004, R2 = 0.48"
Pca
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Pca_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Pca_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.95857 -0.33066 -0.04088  0.22278  0.99506 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   1.00162    0.49957   2.005
## eco_metaeco_typeLarge connected to small      0.13988    0.70649   0.198
## eco_metaeco_typeLarge connected to large:day -0.04165    0.02595  -1.605
## eco_metaeco_typeLarge connected to small:day -0.07858    0.02595  -3.028
##                                              Pr(>|t|)   
## (Intercept)                                   0.05869 . 
## eco_metaeco_typeLarge connected to small      0.84505   
## eco_metaeco_typeLarge connected to large:day  0.12415   
## eco_metaeco_typeLarge connected to small:day  0.00664 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.614 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.4464, Adjusted R-squared:  0.3634 
## F-statistic: 5.377 on 3 and 20 DF,  p-value: 0.007038
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Pca_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 50.32370
## null_model  2 58.51726
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Pca_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Pca_d ~ 1
##   Res.Df     RSS Df Sum of Sq      F   Pr(>F)   
## 1     20  7.5408                                
## 2     23 13.6226 -3   -6.0818 5.3768 0.007038 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = -8.19, p = 0.007, R2 = 0.45"
Spi
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Spi_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Spi_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.12273 -0.26219 -0.01716  0.48809  0.72418 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                  -0.42257    0.47896  -0.882
## eco_metaeco_typeLarge connected to small     -0.86659    0.67736  -1.279
## eco_metaeco_typeLarge connected to large:day -0.00927    0.02488  -0.373
## eco_metaeco_typeLarge connected to small:day  0.02443    0.02488   0.982
##                                              Pr(>|t|)
## (Intercept)                                     0.388
## eco_metaeco_typeLarge connected to small        0.215
## eco_metaeco_typeLarge connected to large:day    0.713
## eco_metaeco_typeLarge connected to small:day    0.338
## 
## Residual standard error: 0.5887 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.1021, Adjusted R-squared:  -0.03264 
## F-statistic: 0.7577 on 3 and 20 DF,  p-value: 0.5309
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Spi_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 48.30212
## null_model  2 44.88564
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Spi_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Spi_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 6.9317                           
## 2     23 7.7195 -3  -0.78781 0.7577 0.5309
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = 3.42, p = 0.531, R2 = 0.1"
Spi_te
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Spi_te_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
## 
## Call:
## lm(formula = Spi_te_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.15397 -0.53575 -0.08505  0.36994  1.22778 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   0.64844    0.62100   1.044
## eco_metaeco_typeLarge connected to small     -0.68205    0.87823  -0.777
## eco_metaeco_typeLarge connected to large:day -0.03061    0.03226  -0.949
## eco_metaeco_typeLarge connected to small:day -0.02530    0.03226  -0.784
##                                              Pr(>|t|)
## (Intercept)                                     0.309
## eco_metaeco_typeLarge connected to small        0.446
## eco_metaeco_typeLarge connected to large:day    0.354
## eco_metaeco_typeLarge connected to small:day    0.442
## 
## Residual standard error: 0.7633 on 20 degrees of freedom
##   (12 observations deleted due to missingness)
## Multiple R-squared:  0.2018, Adjusted R-squared:  0.08209 
## F-statistic: 1.686 on 3 and 20 DF,  p-value: 0.2021
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Spi_te_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
##            df      AIC
## full_model  5 60.76854
## null_model  2 60.17863
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: Spi_te_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: Spi_te_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 11.653                           
## 2     23 14.599 -3   -2.9464 1.6857 0.2021
p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)

print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))
## [1] "ΔAIC = 0.59, p = 0.202, R2 = 0.2"
Tet
filtered_data = ds_patches_effect_size %>%
  filter(patch_size == patch_size_input,
         disturbance == disturbance_input,
         time_point >= first_time_point_model,
         time_point <= last_time_point_model)
full_model = lm(Tet_d ~                  
                  eco_metaeco_type +
                  eco_metaeco_type : day,
                  data = filtered_data)

summary(full_model)
par(mfrow = c(2,3))
plot(full_model, which = 1:5)

null_model = lm(Tet_d ~                  
                  1,
                  data = filtered_data)

AIC = AIC(full_model, null_model)
AIC
AIC_full = AIC$AIC[1]
AIC_null = AIC$AIC[2]
deltaAIC = AIC_full - AIC_null
deltaAIC = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova

p_value = anova$`Pr(>F)`[2]
p_value = round(p_value, digits = 3)

R2_full = glance(full_model)$r.squared
R2_null = glance(null_model)$r.squared
R2_P =  R2_full - R2_null
R2_P = round(R2_P, digits = 2)
print(paste0("ΔAIC = ", deltaAIC, ", p = ", p_value, ", R2 = ", R2_P))

Species dominance

for (species_input in protist_species){
  
  response_variable = paste0(species_input, "_dominance")
  
  print(plot.patches.boxplots(eco_metaeco_type_input,
                       response_variable))
  
}

Species composition

for(time_point_input in first_time_point:last_time_point) {
  
  print(paste("Time point: ", time_point_input))
  
  print(
    plot.patches.species.composition.stacked(eco_metaeco_type_input,
                                             time_point_input)
  )
  
}
## [1] "Time point:  0"

## [1] "Time point:  1"

## [1] "Time point:  2"

## [1] "Time point:  3"

## [1] "Time point:  4"

## [1] "Time point:  5"

## [1] "Time point:  6"

## [1] "Time point:  7"

Size class densities

Means & 95% CI
for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points(eco_metaeco_type_input,
                            size_class_input))
  
}

Effect sizes
for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points.ES(eco_metaeco_type_input,
                            size_class_input))
  
}
## (ggplot2 warnings, one pair per plot above: between 6 and 18 rows containing missing values were removed from geom_point() and geom_line())

Body size distribution

Median body size

plot.patches.median.body.size.boxplots(eco_metaeco_type_input)

plot.patches.median.body.size.points(eco_metaeco_type_input)

plot.patches.median.body.size.points.ES(eco_metaeco_type_input)
## Warning: Removed 7 rows containing missing values (`geom_point()`).
## Warning: Removed 7 rows containing missing values (`geom_line()`).

SL (small-large meta-ecosystems)

eco_metaeco_type_input = c("Small connected to large",
                           "Large connected to small")

Biomass

response_variable = "bioarea_per_volume"
p_connected_biomass_effect_size = plot.patches.points.ES(eco_metaeco_type_input,
                                                         paste0(response_variable, "_d"))

p_connected_biomass_effect_size

Analysis

  1. Filter the data, keeping time points 2 through 7 (the last time point). Time points 0 and 1 are excluded because they precede the first disturbance, which was applied right after time point 1.
filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% eco_metaeco_type_input)
  2. Construct a full model that captures our two hypotheses.

    The first hypothesis proposes that small patches connected to large ones will exhibit greater biomass compared to small isolated patches, while large patches connected to small ones will display lower biomass relative to large isolated patches. As such, the effect size of bioarea per volume will depend on the patch type (bioarea_per_volume_d ~ eco_metaeco_type).


    The second hypothesis states that, as the experiment progresses, small patches connected to large ones will show a greater increase in biomass than small isolated patches, while large patches connected to small ones will show a smaller increase in biomass than large isolated patches. Therefore, the effect size of bioarea per volume will depend on the interaction between patch type and day
    (bioarea_per_volume_d ~ eco_metaeco_type : day).

full_model = lm(bioarea_per_volume_d ~
                  eco_metaeco_type + 
                  eco_metaeco_type : day,
                data = filtered_data)
  3. Check the output of the model.
summary(full_model)
## 
## Call:
## lm(formula = bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.0794 -0.2019  0.1126  0.3380  0.5628 
## 
## Coefficients:
##                                              Estimate Std. Error t value
## (Intercept)                                   0.81003    0.40944   1.978
## eco_metaeco_typeLarge connected to small     -1.68806    0.57904  -2.915
## eco_metaeco_typeSmall connected to large:day  0.03594    0.02127   1.690
## eco_metaeco_typeLarge connected to small:day -0.01322    0.02127  -0.622
##                                              Pr(>|t|)   
## (Intercept)                                   0.06182 . 
## eco_metaeco_typeLarge connected to small      0.00856 **
## eco_metaeco_typeSmall connected to large:day  0.10657   
## eco_metaeco_typeLarge connected to small:day  0.54118   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5033 on 20 degrees of freedom
## Multiple R-squared:  0.8889, Adjusted R-squared:  0.8723 
## F-statistic: 53.36 on 3 and 20 DF,  p-value: 1.002e-09
  1. Check that the residuals of the model are okay (model diagnostics)
par(mfrow = c(2, 3))
plot(full_model, which = 1:5)

  1. Construct the null model for both hypotheses, where there is no dependency of bioarea per volume on either patch type or the interaction between patch type and day.
null_model = lm(bioarea_per_volume_d ~
                  1,
                data = filtered_data)
  1. Get the p value and the delta AIC of the full model compared to the null model.
AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_biomass = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: bioarea_per_volume_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1     20  5.065                                  
## 2     23 45.607 -3   -40.542 53.358 1.002e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
if (p_value < 0.001){
  p_value_connected_biomass = "< 0.001"
} else {
  p_value_connected_biomass = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_biomass, ", ΔAIC = ", deltaAIC_connected_biomass))
## [1] "P value = < 0.001, ΔAIC = -46.74"
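The delta AIC and p value extraction above is repeated verbatim for every model comparison below; it could be wrapped in a small helper. A minimal sketch (the name `compare.models` and the toy data are hypothetical, not part of the analysis):

```r
# Sketch of a helper wrapping the repeated delta AIC / p value extraction.
# Returns the rounded delta AIC (full minus null) and a formatted p value,
# mirroring the inline code above.
compare.models = function(full_model, null_model) {
  deltaAIC = round(AIC(full_model) - AIC(null_model), digits = 2)
  p_value = anova(full_model, null_model)$`Pr(>F)`[2]
  if (p_value < 0.001) {
    p_value = "< 0.001"
  } else {
    p_value = round(p_value, digits = 3)
  }
  list(deltaAIC = deltaAIC, p_value = p_value)
}

# Toy example on simulated data with a strong group effect
set.seed(1)
toy = data.frame(x = rep(c("a", "b"), each = 20))
toy$y = ifelse(toy$x == "a", 0, 3) + rnorm(40)
comparison = compare.models(lm(y ~ x, data = toy), lm(y ~ 1, data = toy))
```

With such a helper, each comparison below would reduce to one call per null model.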
  1. Test the effect of patch type (eco_metaeco_type).

    Construct the null model.

null_model = lm(bioarea_per_volume_d ~
                  eco_metaeco_type : day,
                data = filtered_data)

Get the p value and AIC.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_biomass = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: bioarea_per_volume_d ~ eco_metaeco_type:day
##   Res.Df    RSS Df Sum of Sq     F   Pr(>F)   
## 1     20 5.0654                               
## 2     21 7.2180 -1   -2.1525 8.499 0.008556 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
if (p_value < 0.001){
  p_value_connected_biomass = "< 0.001"
} else {
  p_value_connected_biomass = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_biomass, ", ΔAIC = ", deltaAIC_connected_biomass))
## [1] "P value = 0.009, ΔAIC = -6.5"
  1. Test the effect of the interaction between patch type and day (eco_metaeco_type : day).

    Construct the null model.

null_model = lm(bioarea_per_volume_d ~
                  eco_metaeco_type,
                data = filtered_data)

Get the p value and AIC.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_biomass = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: bioarea_per_volume_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: bioarea_per_volume_d ~ eco_metaeco_type
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     20 5.0654                           
## 2     22 5.8866 -2   -0.8212 1.6212 0.2226
p_value = anova$`Pr(>F)`[2]
if (p_value < 0.001){
  p_value_connected_biomass = "< 0.001"
} else {
  p_value_connected_biomass = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_biomass, ", ΔAIC = ", deltaAIC_connected_biomass))
## [1] "P value = 0.223, ΔAIC = 0.39"

So, it seems that the biomass of small patches connected to large ones and of large patches connected to small ones differs, but that this difference does not change over time. But:

  1. Can we conclude that the effect size for the large connected to small is negative compared to its isolated counterpart, and the effect size for the small connected to large is positive compared to its isolated counterpart?
  2. It seems that the effect size of the large connected to small is decreasing, while that of the small connected to large is increasing. Is testing the interaction between patch type and day enough? Or should I test this for each of the patch types?
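For the second question, one option would be to estimate the day slope separately within each patch type and inspect its sign, e.g. by refitting the model on each subset. A minimal sketch on simulated data (the group labels mimic `eco_metaeco_type`; the real analysis would use `filtered_data` and `bioarea_per_volume_d`):

```r
# Sketch: fit y ~ day within each group and compare the per-group slopes.
set.seed(42)
toy = expand.grid(day = seq(4, 28, by = 4),
                  group = c("Small connected to large",
                            "Large connected to small"))
toy$y = ifelse(toy$group == "Small connected to large",
               0.05 * toy$day, -0.05 * toy$day) + rnorm(nrow(toy), sd = 0.1)

slopes = sapply(split(toy, toy$group),
                function(d) unname(coef(lm(y ~ day, data = d))["day"]))
slopes
```

Such per-type slopes would complement, rather than replace, the single interaction test above: the interaction test asks whether any slope differs between types, while the per-type fits describe each trend (at the cost of multiple testing).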

Alpha diversity

response_variable = "shannon"
p_connected_alpha_effect_size = plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))

p_connected_alpha_effect_size

Analysis

  1. Filter the data.

    Filter between time point 2 and time point 7 (last time point). We exclude time point 0 and time point 1, as these happened before the first disturbance took place (the first disturbance took place right after time point 1).
filtered_data = ds_patches_effect_size %>%
                         filter(time_point >= first_time_point_model,
                                time_point <= last_time_point_model,
                                disturbance == disturbance_input,
                                eco_metaeco_type %in% eco_metaeco_type_input)
  1. Construct a full model that captures our two hypotheses.

    The first hypothesis proposes that small patches connected to large ones will exhibit greater alpha diversity compared to small isolated patches, while large patches connected to small ones will display lower alpha diversity relative to large isolated patches. As such, the effect size of Shannon diversity will depend on the patch type (shannon_d ~ eco_metaeco_type).


    The second hypothesis states that, as the experiment progresses, small patches connected to large ones will exhibit a greater increase in alpha diversity compared to small isolated patches, while large patches connected to small ones will display a smaller increase relative to large isolated patches. Therefore, the effect size of Shannon diversity will depend on the interaction between patch type and day
    (shannon_d ~ eco_metaeco_type : day).

full_model = lm(shannon_d ~
                  eco_metaeco_type + 
                  eco_metaeco_type : day,
                data = filtered_data)
  1. Check the output of the model.
summary(full_model)
## 
## Call:
## lm(formula = shannon_d ~ eco_metaeco_type + eco_metaeco_type:day, 
##     data = filtered_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.0857 -0.3351 -0.0879  0.4387  0.9786 
## 
## Coefficients:
##                                                 Estimate Std. Error t value
## (Intercept)                                     0.337940   0.452750   0.746
## eco_metaeco_typeSmall connected to large        1.501452   0.640285   2.345
## eco_metaeco_typeMedium connected to medium      0.310432   0.640285   0.485
## eco_metaeco_typeLarge connected to large       -0.651207   0.640285  -1.017
## eco_metaeco_typeLarge connected to small       -1.228744   0.640285  -1.919
## eco_metaeco_typeSmall connected to small:day    0.017723   0.023516   0.754
## eco_metaeco_typeSmall connected to large:day   -0.023466   0.023516  -0.998
## eco_metaeco_typeMedium connected to medium:day -0.010631   0.023516  -0.452
## eco_metaeco_typeLarge connected to large:day    0.005529   0.023516   0.235
## eco_metaeco_typeLarge connected to small:day    0.033125   0.023516   1.409
##                                                Pr(>|t|)  
## (Intercept)                                      0.4589  
## eco_metaeco_typeSmall connected to large         0.0230 *
## eco_metaeco_typeMedium connected to medium       0.6299  
## eco_metaeco_typeLarge connected to large         0.3140  
## eco_metaeco_typeLarge connected to small         0.0607 .
## eco_metaeco_typeSmall connected to small:day     0.4546  
## eco_metaeco_typeSmall connected to large:day     0.3231  
## eco_metaeco_typeMedium connected to medium:day   0.6532  
## eco_metaeco_typeLarge connected to large:day     0.8151  
## eco_metaeco_typeLarge connected to small:day     0.1651  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5565 on 50 degrees of freedom
##   (36 observations deleted due to missingness)
## Multiple R-squared:  0.615,  Adjusted R-squared:  0.5456 
## F-statistic: 8.873 on 9 and 50 DF,  p-value: 7.818e-08
  1. Check that the residuals of the model are okay (model diagnostics)
par(mfrow = c(2, 3))
plot(full_model, which = 1:5)

  1. Construct the null model for both hypotheses, where there is no dependency of Shannon diversity on either patch type or the interaction between patch type and day.
null_model = lm(shannon_d ~
                  1,
                data = filtered_data)
  1. Get the p value and the delta AIC of the full model compared to the null model.
AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_alpha = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: shannon_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: shannon_d ~ 1
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1     50 15.484                                  
## 2     59 40.214 -9    -24.73 8.8728 7.818e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
if(p_value < 0.001){
  p_value_connected_alpha = "< 0.001"
} else {
  p_value_connected_alpha = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_alpha, ", ΔAIC = ", deltaAIC_connected_alpha))
## [1] "P value = < 0.001, ΔAIC = -39.26"
  1. Test the effect of patch type (eco_metaeco_type).

    Construct the null model.

null_model = lm(shannon_d ~
                  eco_metaeco_type : day,
                data = filtered_data)

Get the p value and AIC.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_alpha = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: shannon_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: shannon_d ~ eco_metaeco_type:day
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)   
## 1     50 15.484                                
## 2     54 21.956 -4   -6.4718 5.2245 0.001352 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
p_value = anova$`Pr(>F)`[2]
if (p_value < 0.001){
  p_value_connected_alpha = "< 0.001"
} else {
  p_value_connected_alpha = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_alpha, ", ΔAIC = ", deltaAIC_connected_alpha))
## [1] "P value = 0.001, ΔAIC = -12.95"
  1. Test the effect of the interaction between patch type and day (eco_metaeco_type : day).

    Construct the null model.

null_model = lm(shannon_d ~
                  eco_metaeco_type,
                data = filtered_data)

Get the p value and AIC.

AIC = AIC(full_model, null_model) %>% 
  rownames_to_column("model")

AIC_full = AIC %>%
           filter(model == "full_model") %>%
           pull(AIC)

AIC_null = AIC %>%
           filter(model == "null_model") %>%
           pull(AIC)

deltaAIC = AIC_full - AIC_null
deltaAIC_connected_alpha = round(deltaAIC, digits = 2)

anova = anova(full_model, null_model)
anova
## Analysis of Variance Table
## 
## Model 1: shannon_d ~ eco_metaeco_type + eco_metaeco_type:day
## Model 2: shannon_d ~ eco_metaeco_type
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     50 15.484                           
## 2     55 16.663 -5   -1.1791 0.7615 0.5818
p_value = anova$`Pr(>F)`[2]
if (p_value < 0.001){
  p_value_connected_alpha = "< 0.001"
} else {
  p_value_connected_alpha = round(p_value, digits = 3)
}
print(paste0("P value = ", p_value_connected_alpha, ", ΔAIC = ", deltaAIC_connected_alpha))
## [1] "P value = 0.582, ΔAIC = 5.6"

So, it seems that the alpha diversity of small patches connected to large ones and of large patches connected to small ones differs, but that this difference does not change over time. But:

  1. Can we conclude that the effect size for the large connected to small is negative compared to its isolated counterpart, and the effect size for the small connected to large is positive compared to its isolated counterpart?
  2. It seems that the effect size of the large connected to small is decreasing, while that of the small connected to large is increasing. Is testing the interaction between patch type and day enough? Or should I test this for each of the patch types?
  3. How do we account for the non-linearity in the ES trend of the small connected to large?
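For the third question, one option would be to allow a curved trend in day, e.g. via a quadratic term, and check whether the extra flexibility is supported. A minimal sketch on simulated data (a saturating trend stands in for the small-connected-to-large effect size; the real analysis would use `filtered_data` and `shannon_d`):

```r
# Sketch: compare a linear and a quadratic day trend via AIC.
set.seed(7)
toy = data.frame(day = rep(seq(4, 28, by = 4), times = 4))
toy$y = 1.5 * log(toy$day) + rnorm(nrow(toy), sd = 0.2)

linear_model    = lm(y ~ day,          data = toy)
quadratic_model = lm(y ~ poly(day, 2), data = toy)
AIC(linear_model, quadratic_model)
```

A lower AIC for the quadratic model would indicate support for the non-linear trend; smoothers (e.g. GAMs) would serve the same purpose with more flexibility.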

Evenness

response_variable = "evenness_pielou"
plot.patches.points.ES(eco_metaeco_type_input,
                       paste0(response_variable, "_d"))
## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Warning: Removed 8 rows containing missing values (`geom_line()`).

Species densities

for (species_input in protist_species){
  
  response_variable = species_input
  
  print(plot.patches.points.ES(eco_metaeco_type_input,
                              paste0(response_variable, "_d")))
  
}

## Warning: Removed 4 rows containing missing values (`geom_point()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Warning: Removed 10 rows containing missing values (`geom_line()`).

## Warning: Removed 2 rows containing missing values (`geom_point()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Removed 10 rows containing missing values (`geom_line()`).

## Warning: Removed 2 rows containing missing values (`geom_point()`).
## Warning: Removed 2 rows containing missing values (`geom_line()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).
## Warning: Removed 6 rows containing missing values (`geom_line()`).

## Warning: Removed 26 rows containing missing values (`geom_point()`).
## Warning: Removed 26 rows containing missing values (`geom_line()`).

Species dominance

for (species_input in protist_species){
  
  response_variable = paste0(species_input, "_dominance_d")
  
  print(plot.patches.points.ES(eco_metaeco_type_input,
                              response_variable))
  
}
## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Warning: Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Warning: Removed 10 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Warning: Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Warning: Removed 10 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Warning: Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 8 rows containing missing values (`geom_point()`).
## Removed 8 rows containing missing values (`geom_line()`).

## Warning: Removed 26 rows containing missing values (`geom_point()`).
## Warning: Removed 26 rows containing missing values (`geom_line()`).

Species composition

for(time_point_input in first_time_point:last_time_point) {
  
  print(paste("Time point: ", time_point_input))
  
  print(
    plot.patches.species.composition.stacked(eco_metaeco_type_input,
                                             time_point_input)
  )
  
}
## [1] "Time point:  0"

## [1] "Time point:  1"

## [1] "Time point:  2"

## [1] "Time point:  3"

## [1] "Time point:  4"

## [1] "Time point:  5"

## [1] "Time point:  6"

## [1] "Time point:  7"

Size classes densities

for(size_class_input in 1:nr_of_size_classes){
  
  print(plot.patches.classes.points.ES(eco_metaeco_type_input,
                                       size_class_input))
  
}

## Warning: Removed 1 rows containing missing values (`geom_point()`).
## Warning: Removed 1 row containing missing values (`geom_line()`).

## Warning: Removed 7 rows containing missing values (`geom_point()`).
## Warning: Removed 7 rows containing missing values (`geom_line()`).

## Warning: Removed 7 rows containing missing values (`geom_point()`).
## Removed 7 rows containing missing values (`geom_line()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Warning: Removed 9 rows containing missing values (`geom_line()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).
## Warning: Removed 12 rows containing missing values (`geom_line()`).

## Warning: Removed 10 rows containing missing values (`geom_point()`).
## Warning: Removed 7 rows containing missing values (`geom_line()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).
## Warning: Removed 12 rows containing missing values (`geom_line()`).

## Warning: Removed 11 rows containing missing values (`geom_point()`).
## Warning: Removed 11 rows containing missing values (`geom_line()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).
## Warning: Removed 12 rows containing missing values (`geom_line()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).
## Removed 12 rows containing missing values (`geom_line()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).
## Removed 12 rows containing missing values (`geom_line()`).

Median body size

plot.patches.median.body.size.points.ES(eco_metaeco_type_input)
## Warning: Removed 1 rows containing missing values (`geom_point()`).
## Warning: Removed 1 row containing missing values (`geom_line()`).

Productivity drivers

Species richness

All data points

ds_patches %>%
  filter(disturbance == disturbance_input) %>%
  ggplot(aes(x = species_richness,
             y = bioarea_per_volume)) +
  geom_point()
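The scatterplot could be complemented by quantifying the slope of the richness–productivity relationship with a simple regression. A minimal sketch on simulated data (toy values stand in for `ds_patches`):

```r
# Sketch: regress productivity on species richness and extract the slope.
set.seed(3)
toy = data.frame(species_richness = rep(1:10, times = 3))
toy$bioarea_per_volume = 0.4 * toy$species_richness + rnorm(nrow(toy), sd = 0.5)

bef_model = lm(bioarea_per_volume ~ species_richness, data = toy)
coef(bef_model)[["species_richness"]]
```

On the real data, fitting this per treatment and time point would mirror the plot.relationship.BEF panels below.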

Single treatments & time points

Small isolated
plot.relationship.BEF("Small isolated",
                      "species_richness")

Medium isolated
plot.relationship.BEF("Medium isolated",
                      "species_richness")

Large isolated
plot.relationship.BEF("Large isolated",
                      "species_richness")

Small connected to large
plot.relationship.BEF("Small connected to large",
                      "species_richness")

Large connected to small
plot.relationship.BEF("Large connected to small",
                      "species_richness")

Shannon index

All data points

ds_patches %>%
  filter(disturbance == disturbance_input) %>%
  ggplot(aes(x = shannon,
             y = bioarea_per_volume)) +
  geom_point()

Single treatments & time points

Small isolated
plot.relationship.BEF("Small isolated",
                      "shannon")

Medium isolated
plot.relationship.BEF("Medium isolated",
                      "shannon")

Large isolated
plot.relationship.BEF("Large isolated",
                      "shannon")

Small connected to large
plot.relationship.BEF("Small connected to large",
                      "shannon")

Large connected to small
plot.relationship.BEF("Large connected to small",
                      "shannon")

Evenness

All data points

ds_patches %>%
  filter(disturbance == disturbance_input) %>%
  ggplot(aes(x = evenness_pielou,
             y = bioarea_per_volume)) +
  geom_point()
## Warning: Removed 14 rows containing missing values (`geom_point()`).

Single treatments & time points

Small isolated
plot.relationship.BEF("Small isolated",
                      "evenness_pielou")
## Warning: Removed 5 rows containing missing values (`geom_point()`).

Medium isolated
plot.relationship.BEF("Medium isolated",
                      "evenness_pielou")

Large isolated
plot.relationship.BEF("Large isolated",
                      "evenness_pielou")

Small connected to large
plot.relationship.BEF("Small connected to large",
                      "evenness_pielou")
## Warning: Removed 4 rows containing missing values (`geom_point()`).

Large connected to small
plot.relationship.BEF("Large connected to small",
                      "evenness_pielou")

Species dominance

All data points

for(species_input in protist_species){
  
  print(ds_patches %>%
  filter(disturbance == disturbance_input) %>%
  ggplot(aes(x = get(paste0(species_input, "_dominance")),
             y = bioarea_per_volume)) +
  geom_point() +
    labs(x = paste0(species_input, "_dominance")))
  
}
## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 11 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 11 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

## Warning: Removed 12 rows containing missing values (`geom_point()`).

Single treatments & time points

Small isolated
plot.relationship.dominance.productivity("Small isolated",
                                         protist_species)
## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 5 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 5 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

## Warning: Removed 6 rows containing missing values (`geom_point()`).

Medium isolated
plot.relationship.dominance.productivity("Medium isolated",
                                         protist_species)

Large isolated
plot.relationship.dominance.productivity("Large isolated",
                                         protist_species)

Small connected to large
plot.relationship.dominance.productivity("Small connected to large",
                                         protist_species)

Large connected to small
plot.relationship.dominance.productivity("Large connected to small",
                                         protist_species)

Evaporation

Evaporation tests

Evaporation when microwaving 15 falcon tubes at a time

evaporation.test = read.csv(here("data", "evaporation_test","evaporation_test_right.csv"), header = TRUE)

evaporation.test %>%
  ggplot(aes(x = as.character(water_pipetted),
             y = weight_water_evaporated,
             group = interaction(water_pipetted, as.character(rack)),
             fill = as.character(rack))) +
  geom_boxplot(width = boxplot_width) +
  labs(x = "Water volume (ml)",
       y = "Evaporation (g)",
       fill = "Rack replicate")
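Alongside the boxplot, the evaporation per treatment can be tabulated. A minimal sketch, assuming the same `water_pipetted` and `weight_water_evaporated` columns (toy values stand in for the CSV):

```r
library(dplyr)

# Toy data standing in for evaporation_test_right.csv
evaporation.toy = data.frame(
  water_pipetted = rep(c(6.75, 13.5), each = 4),
  weight_water_evaporated = c(0.10, 0.12, 0.11, 0.09,
                              0.20, 0.22, 0.19, 0.21))

# Mean and standard deviation of evaporation per pipetted water volume
evaporation.summary = evaporation.toy %>%
  group_by(water_pipetted) %>%
  summarise(mean_evaporation = mean(weight_water_evaporated),
            sd_evaporation = sd(weight_water_evaporated))
evaporation.summary
```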

Evaporation when microwaving 5 tubes alongside 10 other tubes, either filled or empty

evaporation.test = read.csv(here("data", "evaporation_test", "evaporation_test_fill_nofill.csv"), header = TRUE)

evaporation.test %>%
  ggplot(aes(x = all_tubes_water,
             y = weight_water_evaporated)) +
  geom_boxplot(width = boxplot_width) +
  labs(x = "Water in the other 10 tubes",
       y = "Evaporation (g)",
       caption = "When all tubes were filled, they were filled with 6.75 ml of deionised water.")

Video analysis script

rm(list = ls())
setwd("/media/mendel-himself/ID_061_Ema2/PatchSizePilot/training")

# load package
# library(devtools)
# install_github("femoerman/bemovi", ref="master")
library(bemovi)
library(parallel)
library(doParallel)
library(foreach)

#Define memory to be allocated
memory.alloc <- 240000 #-needs_to_be_specified
memory.per.identifier <- 40000 #-needs_to_be_specified
memory.per.linker <- 5000 #-needs_to_be_specified
memory.per.overlay <- 60000 #-needs_to_be_specified

# UNIX
# set paths to tools folder and particle linker
tools.path <-
  "/home/mendel-himself/bemovi_tools/" #-needs_to_be_specified
to.particlelinker <- tools.path

# directories and file names
to.data <- paste(getwd(), "/", sep = "")
video.description.folder <- "0_video_description/"
video.description.file <- "video_description.txt"
raw.video.folder <- "1_raw/"
raw.avi.folder <- "1a_raw_avi/"
metadata.folder <- "1b_raw_meta/"
particle.data.folder <- "2_particle_data/"
trajectory.data.folder <- "3_trajectory_data/"
temp.overlay.folder <- "4a_temp_overlays/"
overlay.folder <- "4_overlays/"
merged.data.folder <- "5_merged_data/"
ijmacs.folder <- "ijmacs/"


######################################################################
# VIDEO PARAMETERS

# video frame rate (in frames per second)
fps <- 25 #-needs_to_be_specified

# length of video (in frames)
total_frames <- 125 #-needs_to_be_specified

#Dimensions of the videos in pixels
width = 2048 #-needs_to_be_specified
height = 2048 #-needs_to_be_specified

# measured volume (in microliter) #-needs_to_be_specified
measured_volume <-
  34.4 # for Leica M205 C with 1.6 fold magnification, sample height 0.5 mm and Hamamatsu Orca Flash 4
#measured_volume <- 14.9 # for Nikon SMZ1500 with 2 fold magnification, sample height 0.5 mm and Canon 5D Mark III

# size of a pixel (in micrometer) #-needs_to_be_specified
pixel_to_scale <-
  4.05 # for Leica M205 C with 1.6 fold magnification, sample height 0.5 mm and Hamamatsu Orca Flash 4
#pixel_to_scale <- 3.79 # for Nikon SMZ1500 with 2 fold magnification, sample height 0.5 mm and Canon 5D Mark III

# specify video file format (one of "avi","cxd","mov","tiff")
# bemovi only works with avi and cxd; other formats are reformatted to avi below
video.format <- "cxd" #-needs_to_be_specified

# setup
difference.lag <- 10
thresholds <- c(13, 255) # don't change the second value
# thresholds <- c(50,255)

# MORE PARAMETERS (USUALLY NOT CHANGED)
######################################################################
# FILTERING PARAMETERS
# optimized for Perfex Pro 10 stereomicrocope with Perfex SC38800 (IDS UI-3880LE-M-GL) camera
# tested stereomicroscopes: Perfex Pro 10, Nikon SMZ1500, Leica M205 C
# tested cameras: Perfex SC38800, Canon 5D Mark III, Hamamatsu Orca Flash 4
# tested species: Tet, Col, Pau, Pca, Eug, Chi, Ble, Ceph, Lox, Spi

# min and max size: area in pixels
particle_min_size <- 10
particle_max_size <- 1000

# number of adjacent frames to be considered for linking particles
trajectory_link_range <- 3
# maximum distance a particle can move between two frames
trajectory_displacement <- 16

# these values are in the units defined by the parameters above: fps (seconds), measured_volume (microliters) and pixel_to_scale (micrometers)
filter_min_net_disp <- 25
filter_min_duration <- 1
filter_detection_freq <- 0.1
filter_median_step_length <- 3

######################################################################
# VIDEO ANALYSIS

#Check if all tools are installed, and if not install them
check_tools_folder(tools.path)

#Ensure computer has permission to run bftools
system(paste0("chmod a+x ", tools.path, "bftools/bf.sh"))
system(paste0("chmod a+x ", tools.path, "bftools/bfconvert"))
system(paste0("chmod a+x ", tools.path, "bftools/showinf"))

# Convert files to compressed avi (takes approx. 2.25 minutes per video)
convert_to_avi(
  to.data,
  raw.video.folder,
  raw.avi.folder,
  metadata.folder,
  tools.path,
  fps,
  video.format
)


# TESTING

# check file format and naming
# check_video_file_names(to.data,raw.avi.folder,video.description.folder,video.description.file)

# check whether the thresholds make sense (set "dark background" and "red")
#check_threshold_values(to.data, raw.avi.folder, ijmacs.folder, 2, difference.lag, thresholds, tools.path,  memory.alloc)

# identify particles
locate_and_measure_particles(
  to.data,
  raw.avi.folder,
  particle.data.folder,
  difference.lag,
  min_size = particle_min_size,
  max_size = particle_max_size,
  thresholds = thresholds,
  tools.path,
  memory = memory.alloc,
  memory.per.identifier = memory.per.identifier,
  max.cores = detectCores() - 1
)

# link the particles
link_particles(
  to.data,
  particle.data.folder,
  trajectory.data.folder,
  linkrange = trajectory_link_range,
  disp = trajectory_displacement,
  start_vid = 1,
  memory = memory.alloc,
  memory_per_linkerProcess = memory.per.linker,
  raw.avi.folder,
  max.cores = detectCores() - 1,
  max_time = 1
)

# merge info from description file and data
merge_data(
  to.data,
  particle.data.folder,
  trajectory.data.folder,
  video.description.folder,
  video.description.file,
  merged.data.folder
)

# load the merged data
load(paste0(to.data, merged.data.folder, "Master.RData"))

# filter data: minimum net displacement, their duration, the detection frequency and the median step length
trajectory.data.filtered <-
  filter_data(
    trajectory.data,
    filter_min_net_disp,
    filter_min_duration,
    filter_detection_freq,
    filter_median_step_length
  )

# summarize trajectory data to individual-based data
morph_mvt <-
  summarize_trajectories(
    trajectory.data.filtered,
    calculate.median = F,
    write = T,
    to.data,
    merged.data.folder
  )

# get sample level info
summarize_populations(
  trajectory.data.filtered,
  morph_mvt,
  write = T,
  to.data,
  merged.data.folder,
  video.description.folder,
  video.description.file,
  total_frames
)

# create overlays for validation
create.subtitle.overlays(
  to.data,
  traj.data = trajectory.data.filtered,
  raw.video.folder,
  raw.avi.folder,
  temp.overlay.folder,
  overlay.folder,
  fps,
  vid.length = total_frames / fps,
  width,
  height,
  tools.path = tools.path,
  overlay.type = "number",
  video.format
)

# Create overlays (old method)
create_overlays(
  traj.data = trajectory.data.filtered,
  to.data = to.data,
  merged.data.folder = merged.data.folder,
  raw.video.folder = raw.avi.folder,
  temp.overlay.folder = "4a_temp_overlays_old/",
  overlay.folder = "4_overlays_old/",
  width = width,
  height = height,
  difference.lag = difference.lag,
  type = "traj",
  predict_spec = F,
  contrast.enhancement = 1,
  IJ.path = "/home/mendel-himself/bemovi_tools",
  memory = memory.alloc,
  max.cores = detectCores() - 1,
  memory.per.overlay = memory.per.overlay
)

########################################################################
# some cleaning up
#system("rm -r 2_particle_data")
#system("rm -r 3_trajectory_data")
#system("rm -r 4a_temp_overlays")
system("rm -r ijmacs")
########################################################################

Paper

Title page

Title: How meta-ecosystem dynamics can homogenise ecosystem function across patch size distribution

Authors (add affiliations): Emanuele Giacomuzzo, Tianna Peller, Isabelle Gounand, Florian Altermatt

Number of words: … Number of figures: …

Abstract

Keywords: …

Introduction

In the past decades, there has been a big push to study ecosystem function (e.g., Naeem et al. (1994); Tilman and Downing (1994)). Understanding the functioning of ecosystems allows us to preserve it in the face of global change, so that nature can keep providing the services our societies depend on. However, most experiments have been carried out at the local scale, which is a single-patch approach to ecosystem function. If we want to understand how ecosystem function is determined at the landscape scale, we need to scale up our research (Gonzalez et al. (2020)). In other words, we need to look at landscapes and ask which patches they are made of and how those patches are connected.

Alongside dispersal (Gonzalez, Mouquet, and Loreau (2009)), the flow of resources (non-living material, such as detritus and/or inorganic nutrients) has been shown to play a role in determining ecosystem function. For example, the reciprocal flow of limiting nutrients has been experimentally found to increase ecosystem function when heterogeneity would otherwise partition different nutrients into different ecosystems, making them be used less efficiently (Gülzow, Wahlen, and Hillebrand (2019)). Likewise, how connections between ecosystems are configured, how many ecosystems each ecosystem is connected to, and the number of connections of the most connected ecosystem have all been found to influence ecosystem function (Marleau, Guichard, and Loreau (2014)). Ecosystems connected through the flow of resources are referred to as meta-ecosystems (Loreau, Mouquet, and Holt (2003)).

However, these models have all considered patches of the same size. The size of both the donor and receiving patches has the potential to alter meta-ecosystem dynamics. Consider first the size of the donor ecosystem. Larger patches harbour more species (MacArthur and Wilson (1963)) and are therefore predicted to have higher function (Benedetti-Cecchi (2005)), so they should produce more detritus (not tested yet). Larger patches also produce more detritus for purely geometrical reasons. For example, lotic and lentic systems both provide emerging insects to terrestrial ecosystems, and the larger the river or lake, the more emerging insects reach the terrestrial ecosystem (Gratton and Zanden (2009)). Moreover, larger donor patches sometimes contain higher trophic levels (Post, Pace, and Hairston (2000)) and should therefore produce detritus with higher amounts of nitrogen and phosphorus relative to carbon (reference). On the other hand, as larger ecosystems are more resistant (Greig et al. (2022)), they may produce less detritus. The size of the receiving ecosystem also matters, as smaller ecosystems should be more permeable to detritus (higher perimeter:area ratio). For example, resources flowing from the sea to an island increase secondary production most on small islands (Polis and Hurd (1996)), and salmon subsidies have the strongest effects in small river watersheds (why?) (Hocking and Reimchen (2009)). Finally, as larger ecosystems are more resistant (Greig et al. (2022)), they should need less detritus to counteract the effects of the perturbation that creates the resource flow.

Here, we test how patch size alters meta-ecosystem function using a protist microcosm experiment (Altermatt et al. (2015)). Throughout, we use biomass production as our measure of ecosystem function, and use the two terms interchangeably. We constructed two meta-ecosystems of the same total volume but with patches of different size. The first was composed of a small and a large patch (small-large meta-ecosystem); the second of two medium patches (medium-medium meta-ecosystem). All patches started with the same protist community (nine water ciliates, one alga, and one rotifer). Resources flowed bidirectionally between the two patches, with the same magnitude in both directions. Resource flows were created by boiling a fixed volume of the community and pouring it into the receiving patch. This caused small patches to be more disturbed than medium patches, and medium patches more than large patches. No dispersal occurred throughout the experiment. We additionally created the following control treatments: isolated small, medium, and large patches, as well as meta-ecosystems with two small patches (small-small meta-ecosystems) and two large patches (large-large meta-ecosystems).

The inflow of resources from a productive patch has already been shown to benefit the functioning of ecosystems under perturbation (Colombo (2021)). We therefore expect small patches, which receive a large amount of resources, to recover better from the perturbations that cause the resource flow, while large patches will recover worse. The effect of resource flow on meta-ecosystem function will depend on how much the large patches increase the function of the small patches and how much the small patches decrease the function of the large patches.

Methods

Experimental design

The effects of patch size on local and regional meta-ecosystem properties were examined through a protist microcosm experiment (Altermatt et al. (2015)). Each meta-ecosystem consisted of two cultures exposed to a disturbance regime. During each disturbance event, a portion of the community was transformed into detritus, which flowed bidirectionally between cultures, connecting them through resource flow. Within a meta-ecosystem, only non-living material was exchanged, and no organisms dispersed. [polished]

Our focal meta-ecosystem was made up of a small patch (7.5 ml) and a large patch (37.5 ml). To investigate the effect of such a size difference on regional properties, we compared this meta-ecosystem to another meta-ecosystem of the same total volume (45 ml) that contained two medium-sized patches (22.5 ml) instead of a small and large patch. To analyze local patch properties, we compared the small patch to other small patches that were either connected to another small patch or isolated instead of being connected to a large patch. Additionally, we evaluated the effect of patch size by comparing isolated patches of different sizes (small, medium, and large) and meta-ecosystems with patches of different sizes (meta-ecosystems with small, medium, and large patches). We referred to the meta-ecosystems based on the size of their patches (e.g., meta-ecosystems with a small and a large patch were called small-large meta-ecosystems). [polished]

All meta-ecosystems and isolated patch treatments were exposed to either low or high disturbance intensities. This resulted in a full factorial design in which we varied (1) the size of meta-ecosystems and isolated patches and (2) disturbance intensity. Each treatment was replicated five times, resulting in a total of 110 microcosms (30 isolated patches and 80 meta-ecosystem patches). Please see Figure 1 for more details. [polished]

Experimental setup

In preparation for the experiment, we grew the protists to carrying capacity eight days before assembly. This was accomplished using autoclaved bottles with medium, two wheat seeds, and a bacterial mix comprising Serratia fonticola, Bacillus subtilis, and Brevibacillus brevis (see Altermatt et al. (2015) for protocols). The medium was standard protist medium (0.46 g/L of Protozoa Pellet by Carolina), and the bacterial mix constituted 5% of the total culture volume. On the day of the experiment’s assembly, we inoculated a large, autoclaved bottle with the eleven protist species, adding the same volume of each species; the combined inoculum made up 15% of the bottle’s final volume. This mixture was pipetted into sterile 50 ml centrifuge tubes (SPL Life Sciences skirted conical centrifuge tubes): 7.5 ml, 22.5 ml, and 37.5 ml into the small, medium, and large patches, respectively. The cultures were then randomized on four foam boards and kept in an incubator at 20°C with constant lighting. The protist community consisted of nine water ciliates (Euplotes aediculatus, Colpidium sp., Loxocephalus sp., Paramecium aurelia, Paramecium caudatum, Spirostomum sp., Spirostomum teres, Tetrahymena cf. pyriformis, and Blepharisma sp.), one alga (Euglena gracilis), and one rotifer (Cephalodella sp.). [polished]

Resource flow

The experiment involved six disturbance events occurring every four days, starting from day 5 (on days 5, 9, 13, 17, 21, and 25). During a disturbance event, subsamples of the cultures (5.25 ml for low disturbance and 6.75 ml for high disturbance) were boiled in a microwave, transforming the community into detritus. These subsamples corresponded to 70% and 90% of the volume of the small patches for low and high disturbance, respectively. In isolated patches, the boiled subsample was poured back into the original patch, whereas in meta-ecosystems it was poured into the connected patch. This resource flow method imitated the detritus flow resulting from the death of organisms during recurrent patch disturbances. As the volume exchanged between patches was identical in both directions (e.g., 5.25 ml flowed from patch 1 to patch 2 and 5.25 ml from patch 2 to patch 1), patch volumes remained constant over time. [polished]
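As a quick sanity check on the percentages above, the disturbance fractions follow directly from the volumes stated in the text (a minimal base-R sketch):

```r
# Disturbance volumes as fractions of the small patch (7.5 ml)
small_patch_ml <- 7.5
low_dist_ml <- 5.25   # low-disturbance subsample
high_dist_ml <- 6.75  # high-disturbance subsample

frac_low <- low_dist_ml / small_patch_ml    # 0.70, i.e. 70% of a small patch
frac_high <- high_dist_ml / small_patch_ml  # 0.90, i.e. 90% of a small patch
```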

Sampling

Throughout the experiment, we monitored changes in community dynamics over time. Sampling occurred eight times, every four days (on days 0, 4, 8, 12, 16, 20, 24, and 28). At each sampling, we collected a 0.2 ml sample per microcosm and followed a standardized video procedure (Pennekamp and Schtickzelle (2013); Pennekamp, Schtickzelle, and Petchey (2015)): each sample was placed under a dissecting microscope connected to a camera, which recorded the culture for five seconds. Using the R package BEMOVI (Pennekamp, Schtickzelle, and Petchey (2015)), we extracted the number of moving organisms, along with their traits (e.g., speed, shape, size), from the video frames using image-processing software (ImageJ). These traits were then used to filter out background movement noise (e.g., medium particles) and to identify species in mixed cultures. [polished] [Add species ID]

Volume Balance

Throughout the experiment, we observed and accounted for variation in evaporation from microwaving across microcosms. For the first three exchange events, we boiled 15 tubes at a time in a rack at 800 W for three minutes using a microwave (Sharp R-202). However, because evaporation was high (mean = 2.43 ml, SD = 0.87), we instead boiled four tubes at a time for one minute during the last three exchanges, which reduced mean evaporation to 1.25 ml (SD = 0.37). [polished]

The evaporated water was replenished with autoclaved deionized water. Before the first two exchange events, 1 ml of water was added to all tubes. However, before the third exchange event, we observed higher-than-anticipated evaporation rates, with cultures on average 1.17 ml (SD = 0.37) below their initial volumes. Therefore, before the third exchange and after each subsequent exchange, we refilled the cultures with water to their initial volume. During the first exchange event, we microwaved most tubes together with other full tubes, except for the last five tubes, which were microwaved with ten empty tubes; replacing full tubes with empty ones resulted in a higher evaporation rate. These five tubes were all part of the high-disturbance small-large meta-ecosystem treatment. To compensate, we added 3.15 ml of water just before the second resource exchange (the calculated difference in evaporated volume). We microwaved all tubes with other full tubes during subsequent exchange events. [polished]

Additionally, we added medium to the cultures during each exchange event to account for the volume sampled at each time point (0.2 ml). However, medium was not added during the sixth exchange as it was right before the final time point. Sampling 0.2 ml of culture at the last time point would not have affected the results as it was the last day of the experiment. [polished]

Statistical analysis

Mixed effect models

To perform mixed-effects modeling on our data, we utilized the R package lme4. The code syntax was obtained from the vignette available at https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf. Interaction terms were coded following the guidelines provided in chapter 9 of the book “An R Companion to Statistics: Data Analysis and Modeling” by Maarten Speekenbrink (https://mspeekenbrink.github.io/sdam-r-companion/linear-mixed-effects-models.html). Model diagnostics were conducted using residual plots (plot(mixed_model)) and quantile-quantile plots (qqnorm(resid(mixed_model))), as recommended by Zuur et al. (2009) (page 487).

The effect size of the explanatory variables was computed as marginal and conditional R squared. The marginal R squared represents the amount of variance explained by the fixed effects, while the conditional R squared accounts for both the fixed and random effects. The calculation of marginal and conditional R squared was performed using the MuMIn package, based on the methods of Nakagawa, Johnson, and Schielzeth (2017). For the coding and interpretation of these R squared values, refer to the documentation for the r.squaredGLMM function.

Time can be included either as a fixed or as a random effect. If the data points are dependent on each other (e.g., seasons), time should be included as a random effect. However, since the biomass in our experiment showed a temporal trend and exhibited autocorrelation (i.e., t2 is more similar to t3 than to t4), we decided to include time as a fixed effect. For a comprehensive discussion on this topic, see this blog post: https://dynamicecology.wordpress.com/2015/11/04/is-it-a-fixed-or-random-effect/.

Some interactions cannot be tested in R, as described in this Stack Overflow thread (https://stackoverflow.com/questions/40729701/how-to-use-formula-in-r-to-exclude-main-effect-but-retain-interaction). This is specifically the case when the interaction is between a factor (e.g., eco_metaeco_type) and a continuous variable (e.g. day), and the factor is the primary predictor. This limitation is due to the model.matrix.default function. To test the effects of day, I must remove the interaction term with eco_metaeco_type.

Effect sizes

To determine how response variables change across treatments, we chose to use an effect size with the control being a patch of the same size but isolated. Initially, we considered using the natural logarithm of the response ratio (lnRR), but some values of the response variables in the control or treatment were 0. Adding 1 to all null values, as suggested in the literature, would have inflated the effect sizes. As a result, we searched for alternative effect size metrics and found that Hedge’s d (also known as Hedge’s g) is the most commonly used and preferred metric today (Hedges and Olkin (1985)). Hedge’s d is calculated as the difference in mean between the treatment and control groups divided by the standard deviation of the pooled data. Another option would have been Cohen’s d, but this metric is less effective for sample sizes smaller than 20 (as discussed in this post on StatisticsHowTo: https://www.statisticshowto.com/hedges-g/).
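As a minimal base-R sketch of this calculation (the vectors below are illustrative values, not experimental data; five replicates per group, matching the design):

```r
# Hedge's d: difference in group means divided by the pooled standard deviation
hedges_d <- function(treatment, control) {
  n1 <- length(treatment)
  n2 <- length(control)
  # pooled SD, weighting each group's variance by its degrees of freedom
  sd_pooled <- sqrt(((n1 - 1) * var(treatment) + (n2 - 1) * var(control)) /
                      (n1 + n2 - 2))
  (mean(treatment) - mean(control)) / sd_pooled
}

treatment <- c(2.1, 2.5, 2.3, 2.8, 2.4)  # e.g. connected patches (made up)
control <- c(1.9, 2.0, 1.8, 2.2, 2.1)    # e.g. isolated patches (made up)
hedges_d(treatment, control)             # ~1.96
```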

Model selection

Model selection was performed using the Akaike Information Criterion (AIC). A difference in AIC of 2 was considered to be the threshold that distinguishes between two models, with the model having the lower AIC being preferred. If the difference in AIC does not exceed 2, the model with the smallest number of parameters is preferred. This approach was suggested by Halsey (2019) as an alternative to using p-values. P-values are not a reliable method of model selection due to the small sample size (five replicates), which can result in larger p-values, and because p-values can be highly variable, leading to many false positives and negatives (e.g., with a p-value of 0.05, there is a 1 in 3 chance that it’s a false positive).
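A minimal base-R illustration of this selection rule, using simulated data and ordinary linear models (the 2-unit threshold is the one described above; the variable names are illustrative):

```r
set.seed(1)
# Simulated biomass with a genuine temporal trend
day <- rep(c(0, 4, 8, 12, 16), each = 5)
bioarea <- 2 + 0.1 * day + rnorm(length(day), sd = 0.5)

m_null <- lm(bioarea ~ 1)   # intercept-only model
m_day <- lm(bioarea ~ day)  # model including the time trend

# Keep the more complex model only if it improves AIC by more than 2;
# otherwise prefer the model with fewer parameters
delta_aic <- AIC(m_null) - AIC(m_day)
chosen <- if (delta_aic > 2) m_day else m_null
```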

Size distribution

The selection of the number of size classes was arbitrary and based on the same number used by Jacquet, Gounand, and Altermatt (2020). Currently, there is no established standard for optimizing the number of size classes when analyzing body size distributions in ecology (Loder, Blackburn, and Gaston (1997)).

Confidence intervals

To determine the 95% confidence interval of means, I used code from a Stack Overflow thread (https://stackoverflow.com/questions/35953394/calculating-length-of-95-ci-using-dplyr). The code is based on the formula presented in the Lumen Learning course “Introduction to Statistics” (https://courses.lumenlearning.com/introstats1/chapter/a-single-population-mean-using-the-student-t-distribution/). The upper and lower bounds of the 95% confidence interval are calculated as follows:

\[ CI_{lower} = mean - (t \: score \times \frac{SD}{\sqrt{sample \: size}}) \]

\[ CI_{upper} = mean + (t \: score \times \frac{SD}{\sqrt{sample \: size}}) \]

where the t-score is computed from the Student t distribution using the R function qt with percentile = 0.975 and degrees of freedom = sample size - 1. Note that the standard deviation is divided by the square root of the sample size (i.e., the standard error of the mean), not by the sample size itself.
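The same calculation can be sketched in base R (illustrative values; note the square root of the sample size in the denominator):

```r
# 95% confidence interval of a mean from the Student t distribution
ci_95 <- function(x) {
  n <- length(x)
  t_score <- qt(0.975, df = n - 1)
  half_width <- t_score * sd(x) / sqrt(n)
  c(lower = mean(x) - half_width, upper = mean(x) + half_width)
}

ci <- ci_95(c(2.1, 2.5, 2.3, 2.8, 2.4))  # illustrative sample of five replicates
```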

To determine the 95% confidence interval of Hedge’s d, I used a formula that is part of DATAPLOT, a software system for scientific visualization, statistical analysis, and non-linear modeling provided by the US government (as described in the following link: https://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/hedges_g.htm). The method was also suggested in a Cross Validated thread (https://stats.stackexchange.com/questions/460367/calculate-effect-size-and-confidence-interval-from-published-means±std). I also cited https://www.meta-analysis.com/downloads/Meta-analysis%20Effect%20sizes%20based%20on%20means.pdf, but I am unsure why.

Hedge’s d ± 95% confidence interval is computed as Hedge’s d ± 1.96 * SE, where SE is calculated as follows:

\[ SE_g = \sqrt{V_g} \]

\[ V_g = J^2 * V_d \]

\[ J = 1 - \frac{3}{4df-1} \]

\[ V_d = \frac{n_1 + n_2}{n_1 n_2} + \frac{d^2}{2(n_1 + n_2)} \]

where d is Hedge’s d, n1 is the sample size of group 1, n2 is the sample size of group 2, and df is the degrees of freedom (df = n1 + n2 - 2).
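The formulas above translate directly into base R; for instance, with five replicates per group (as in this experiment) and a hypothetical effect size:

```r
# 95% confidence interval of Hedge's d, following the formulas above
hedges_d_ci <- function(d, n1, n2) {
  df <- n1 + n2 - 2
  J <- 1 - 3 / (4 * df - 1)  # small-sample correction factor
  V_d <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))
  SE <- sqrt(J^2 * V_d)      # SE_g = sqrt(V_g), with V_g = J^2 * V_d
  c(lower = d - 1.96 * SE, upper = d + 1.96 * SE)
}

ci_d <- hedges_d_ci(d = 1.96, n1 = 5, n2 = 5)  # hypothetical d
```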

Results

Discussion

Meta-ecosystem theory has so far not considered the significance of ecosystem size, despite its established impact at various ecological levels, including genotypes, populations, and communities. To understand how ecosystem function scales up across resource flows, integrating ecosystem size into the meta-ecosystem framework is crucial. Our study suggests that patch size symmetry in a meta-ecosystem may not be critical to its functioning. The explanation lies in the quantity of detritus exchanged between ecosystems, which is essential for ecosystems to recover from perturbations. In the absence of resource flows, asymmetric patch sizes would lead to a more productive meta-ecosystem, since the combined production of isolated large and small patches was greater than that of two medium-sized patches. However, the reciprocal flow of resources between ecosystems of asymmetric size can reduce meta-ecosystem function. This is due to the non-symmetric effects of the reciprocal resource flow on small and large patches: the influx of resources from the large patch enhances the function of the small patch less than the small influx from the small patch reduces the function of the large patch. Consequently, the overall function of the meta-ecosystem decreases.

Our experiment demonstrated that asymmetric patch size has diverse impacts on biodiversity: alpha diversity decreased, beta diversity increased, and gamma diversity remained constant. Notably, small patches exhibited low alpha diversity, which lowered the mean alpha diversity of the entire meta-ecosystem. However, the alpha diversity of small patches increased with resource flow, owing to their high influx of detritus. As anticipated, the higher beta diversity in asymmetric meta-ecosystems resulted from the difference in alpha diversity between small and large patches. The absence of any change in gamma diversity was not unexpected and necessitates further explanation from Florian.

Patch size has an effect on meta-ecosystem function, but this effect depends on the amount of resource flow between the two ecosystems. When there is no flow between them, small-large meta-ecosystems are the most productive (opposed to what Benedetti-Cecchi (2005) predicted). However, the more detritus is exchanged, the lower the function of the meta-ecosystem, which is mainly driven by the largest patch. The small patch increases in function in response to the inflow from the large patch, but not enough to compensate for the decline of the large patch.

The higher species richness experienced by small patches is in line with the Subsidized Island Biogeography Hypothesis (Anderson and Wait (2001)). I need to read Menegotto et al. (2020).

Acknowledgments

Figures

Figure 1. Experimental design. We constructed meta-ecosystems made out of two patches.

ggarrange(
  p_isolated_biomass +
    rremove("xlab") +
    theme(axis.text.x = element_blank(),
          axis.ticks.x = element_blank(),
          plot.margin = unit(c(ggarrange_margin_top,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) +
    font("legend.text", size = size_legend) +
    font("ylab", size = size_y_axis), 
  p_isolated_alpha +
    theme(plot.margin = unit(c(ggarrange_margin_left,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) + 
    font("xlab", size = size_x_axis) + 
    font("ylab", size = size_y_axis) +
    scale_x_continuous(breaks = unique(ds_patches$day)),
  nrow = 2,
  heights = c(0.9,1),
  common.legend = TRUE,
  align = "v"
)
Figure 2. Effect of patch size on (a) isolated-patch bioarea density and (b) isolated-patch alpha diversity. (a): the larger the patch, the higher its biomass density (bioarea is a proxy for biomass). (b): the larger the patch, the higher its diversity (Shannon Index). All patches were sampled at the same time, but points were jittered to make the figure clearer. Vertical grey lines: disturbance events.


ggarrange(
  p_MM_SL_metaecos_biomass +
    rremove("xlab") +
    theme(axis.text.x = element_blank(),
          axis.ticks.x = element_blank(),
          plot.margin = unit(c(ggarrange_margin_top,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) +
    font("legend.text", size = size_legend) +
    font("ylab", size = size_y_axis),
  p_MM_SL_metaecos_alpha +
    rremove("xlab") +
    theme(axis.text.x = element_blank(),
          axis.ticks.x = element_blank(),
          plot.margin = unit(c(ggarrange_margin_top,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) +
    font("legend.text", size = size_legend) +
    font("ylab", size = size_y_axis),
  p_MM_SL_metaecos_beta +
    theme(plot.margin = unit(c(ggarrange_margin_left,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) + 
    font("xlab", size = size_x_axis) +
    font("ylab", size = size_y_axis) +
    scale_x_continuous(breaks = unique(ds_patches$day)),
  heights = c(0.8,0.8,1),
  nrow = 3,
  common.legend = TRUE,
  align = "v"
)
Figure 3. Effect of asymmetry in patch size and resource flow on (a) meta-ecosystem total bioarea and (b) meta-ecosystem beta diversity. (a): medium-medium and small-large meta-ecosystems had the same biomass (bioarea is a proxy for biomass). (b): small-large meta-ecosystems maintained higher beta diversity (Bray-Curtis Index). All systems were sampled at the same time, but points were jittered to make the figure clearer. Vertical grey lines: disturbance events followed by resource flow.


ggarrange(
  p_connected_biomass_effect_size +
    rremove("xlab") +
    theme(axis.text.x = element_blank(),
          axis.ticks.x = element_blank(),
          plot.margin = unit(c(ggarrange_margin_top,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) +
    font("legend.text", size = size_legend) +
    font("ylab", size = size_y_axis), 
  p_connected_alpha_effect_size +
    theme(plot.margin = unit(c(ggarrange_margin_left,
                               ggarrange_margin_right,
                               ggarrange_margin_bottom,
                               ggarrange_margin_left), 
                             "cm")) + 
    font("xlab", size = size_x_axis) + 
    font("ylab", size = size_y_axis) +
    scale_x_continuous(breaks = unique(ds_patches$day)),
  nrow = 2,
  heights = c(0.9,1),
  common.legend = TRUE,
  align = "v"
)
Figure 4. Effect of resource flow on the patches of meta-ecosystems. The effects of resource flow were measured as effect sizes (Hedge's d) relative to isolated patches of the same size. Top: effects of resource flow on the biomass (bioarea is used as a proxy for biomass) of the small and large patches. Flow from the large to the small patch increased the biomass of the small patch; flow from the small patch decreased the biomass of the large patch. Bottom: effects of resource flow on the alpha diversity (Shannon Index) of the small and large patches. Flow from the large to the small patch increased the biodiversity of the small patch; flow from the small patch to the large patch had no effect on the alpha diversity of the large patch. All patches were sampled at the same time, but points were jittered to make the figure clearer. Vertical grey lines: disturbance events followed by resource flow.


Figure legends

Appendix

p_MM_SL_metaecos_gamma

Running time

## Time difference of 5.77795 mins

Other

In the .bib file get rid of:

  • book_section

  • computer_program

  • web_page

Bibliography

Altermatt, Florian, Emanuel A. Fronhofer, Aurélie Garnier, Andrea Giometto, Frederik Hammes, Jan Klecka, Delphine Legrand, et al. 2015. “Big Answers from Small Worlds: A User’s Guide for Protist Microcosms as a Model System in Ecology and Evolution.” Methods in Ecology and Evolution 6: 218–31. https://doi.org/10.1111/2041-210X.12312.
Anderson, Wendy B., and D. A. Wait. 2001. “Subsidized Island Biogeography Hypothesis: Another New Twist on an Old Theory.” Ecology Letters 4: 289–91. https://doi.org/10.1046/j.1461-0248.2001.00226.x.
Benedetti-Cecchi, Lisandro. 2005. “Unanticipated Impacts of Spatial Variance of Biodiversity on Plant Productivity.” Ecology Letters 8 (August): 791–99. https://doi.org/10.1111/j.1461-0248.2005.00780.x.
Colombo, Jessica. 2021. “Local Resource and Resource Flow in Meta- Ecosystems Alter Effects of Disturbance.” University of Zurich.
Gonzalez, Andrew, Rachel M. Germain, Diane S. Srivastava, Elise Filotas, Laura E. Dee, Dominique Gravel, Patrick L. Thompson, et al. 2020. “Scaling-up Biodiversity-Ecosystem Functioning Research.” Ecology Letters 23: 757–76. https://doi.org/10.1111/ele.13456.
Gonzalez, Andrew, Nicolas Mouquet, and Michel Loreau. 2009. “Biodiversity as Spatial Insurance: The Effects of Habitat Fragmentation and Dispersal on Ecosystem Functioning.”
Gratton, Claudio, and M. Jake Vander Zanden. 2009. “Flux of Aquatic Insect Productivity to Land: Comparison of Lentic and Lotic Ecosystems.” Ecology 90 (October): 2689–99. https://doi.org/10.1890/08-1546.1.
Greig, Hamish S., Peter A. McHugh, Ross M. Thompson, Helen J. Warburton, and Angus R. McIntosh. 2022. “Habitat Size Influences Community Stability.” Ecology 103 (January). https://doi.org/10.1002/ecy.3545.
Gülzow, Nils, Yanis Wahlen, and Helmut Hillebrand. 2019. “Metaecosystem Dynamics of Marine Phytoplankton Alters Resource Use Efficiency Along Stoichiometric Gradients.” American Naturalist 193: 35–50. https://doi.org/10.1086/700835.
Halsey, Lewis G. 2019. “The Reign of the p-Value Is over: What Alternative Analyses Could We Employ to Fill the Power Vacuum?” Biology Letters 15. https://doi.org/10.1098/rsbl.2019.0174.
Hedges, Larry V., and Ingram Olkin. 1985. Statistical Methods for Meta-Analysis. Orlando, FL: Academic Press.
Hocking, Morgan D., and Thomas E. Reimchen. 2009. “Salmon Species, Density and Watershed Size Predict Magnitude of Marine Enrichment in Riparian Food Webs.” Oikos 118 (September): 1307–18. https://doi.org/10.1111/j.1600-0706.2009.17302.x.
Jacquet, Claire, Isabelle Gounand, and Florian Altermatt. 2020. “How Pulse Disturbances Shape Size-Abundance Pyramids.” Ecology Letters 23: 1014–23. https://doi.org/10.1111/ele.13508.
Loder, Natasha, Tim M Blackburn, and Kevin J Gaston. 1997. “The Slippery Slope: Towards an Understanding of the Body Size Frequency Distribution.” Vol. 78. https://www.jstor.org/stable/3545817?seq=1&cid=pdf-.
Loreau, Michel, Nicolas Mouquet, and Robert D. Holt. 2003. “Meta-Ecosystems: A Theoretical Framework for a Spatial Ecosystem Ecology.” Ecology Letters 6: 673–79. https://doi.org/10.1046/j.1461-0248.2003.00483.x.
MacArthur, Robert H., and Edward O. Wilson. 1963. “An Equilibrium Theory of Insular Zoogeography.” Evolution 17 (December): 373. https://doi.org/10.2307/2407089.
Marleau, Justin N., Frédéric Guichard, and Michel Loreau. 2014. “Meta-Ecosystem Dynamics and Functioning on Finite Spatial Networks.” Proceedings of the Royal Society B: Biological Sciences 281. https://doi.org/10.1098/rspb.2013.2094.
Menegotto, André, Thiago Fernando Rangel, Julian Schrader, Patrick Weigelt, and Holger Kreft. 2020. “A Global Test of the Subsidized Island Biogeography Hypothesis.” Global Ecology and Biogeography 29 (February): 320–30. https://doi.org/10.1111/geb.13032.
Naeem, Shahid, Lindsey J. Thompson, Sharon P. Lawler, John H. Lawton, and Richard M. Woodfin. 1994. “Declining Biodiversity Can Alter the Performance of Ecosystems.” Nature 368 (April): 734–37. https://doi.org/10.1038/368734a0.
Nakagawa, Shinichi, Paul C. D. Johnson, and Holger Schielzeth. 2017. “The Coefficient of Determination R2 and Intra-Class Correlation Coefficient from Generalized Linear Mixed-Effects Models Revisited and Expanded.” Journal of the Royal Society Interface 14. https://doi.org/10.1098/rsif.2017.0213.
Pennekamp, Frank, and Nicolas Schtickzelle. 2013. “Implementing Image Analysis in Laboratory-Based Experimental Systems for Ecology and Evolution: A Hands-on Guide.” Methods in Ecology and Evolution 4: 483–92. https://doi.org/10.1111/2041-210X.12036.
Pennekamp, Frank, Nicolas Schtickzelle, and Owen L. Petchey. 2015. “BEMOVI, Software for Extracting Behavior and Morphology from Videos, Illustrated with Analyses of Microbes.” Ecology and Evolution 5: 2584–95. https://doi.org/10.1002/ece3.1529.
Polis, Gary A., and Stephen D. Hurd. 1996. “Linking Marine and Terrestrial Food Webs: Allochthonous Input from the Ocean Supports High Secondary Productivity on Small Islands and Coastal Land Communities.” American Naturalist 147: 396–423. https://doi.org/10.1086/285858.
Post, D. M., M. L. Pace, and N. G. Hairston Jr. 2000. “Ecosystem Size Determines Food-Chain Length in Lakes.” Nature 405: 1047–49.
Tilman, David, and John A. Downing. 1994. “Biodiversity and Stability in Grasslands.” Nature 367 (January): 363–65. https://doi.org/10.1038/367363a0.
Zuur, Alain F., Elena N. Ieno, Neil Walker, Anatoly A. Saveliev, and Graham M. Smith. 2009. Mixed Effects Models and Extensions in Ecology with R. Springer New York. https://doi.org/10.1007/978-0-387-87458-6.